Feb 21 06:47:03 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 21 06:47:03 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 
06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.415999 4820 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423295 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423331 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423344 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423356 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423368 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423380 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423391 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423403 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423413 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc 
kubenswrapper[4820]: W0221 06:47:05.423429 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423442 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423454 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423467 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423477 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423489 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423501 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423512 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423526 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423537 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423549 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423558 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423566 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423575 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc kubenswrapper[4820]: 
W0221 06:47:05.423583 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423592 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423601 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423609 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423618 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423626 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423638 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423648 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423658 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423668 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423677 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423686 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423694 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423702 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423711 4820 
feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423721 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423732 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423740 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423748 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423756 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423765 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423773 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423781 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423789 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423798 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423807 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423816 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423827 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423837 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423846 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423855 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423863 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423871 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423884 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423893 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423902 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423912 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423922 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423930 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423939 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423947 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423956 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 
06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423966 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423975 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423985 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423993 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.424003 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.424012 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424164 4820 flags.go:64] FLAG: --address="0.0.0.0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424181 4820 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424200 4820 flags.go:64] FLAG: --anonymous-auth="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424213 4820 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424225 4820 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424265 4820 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424279 4820 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424291 4820 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424301 4820 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424311 4820 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424321 4820 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424331 4820 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424342 4820 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424352 4820 flags.go:64] FLAG: --cgroup-root="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424363 4820 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424373 4820 flags.go:64] FLAG: --client-ca-file="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424382 4820 flags.go:64] FLAG: --cloud-config="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424392 4820 flags.go:64] FLAG: --cloud-provider="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424401 4820 flags.go:64] FLAG: --cluster-dns="[]" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424414 4820 flags.go:64] FLAG: --cluster-domain="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424423 4820 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424434 4820 flags.go:64] FLAG: --config-dir="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424443 4820 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424453 4820 flags.go:64] FLAG: --container-log-max-files="5" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424466 4820 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424475 4820 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424485 4820 
flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424495 4820 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424505 4820 flags.go:64] FLAG: --contention-profiling="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424515 4820 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424525 4820 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424535 4820 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424546 4820 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424558 4820 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424567 4820 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424577 4820 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424587 4820 flags.go:64] FLAG: --enable-load-reader="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424596 4820 flags.go:64] FLAG: --enable-server="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424606 4820 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424617 4820 flags.go:64] FLAG: --event-burst="100" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424628 4820 flags.go:64] FLAG: --event-qps="50" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424637 4820 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424647 4820 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.424656 4820 flags.go:64] FLAG: --eviction-hard="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424668 4820 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424678 4820 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424687 4820 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424696 4820 flags.go:64] FLAG: --eviction-soft="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424706 4820 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424715 4820 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424726 4820 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424735 4820 flags.go:64] FLAG: --experimental-mounter-path="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424744 4820 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424754 4820 flags.go:64] FLAG: --fail-swap-on="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424763 4820 flags.go:64] FLAG: --feature-gates="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424776 4820 flags.go:64] FLAG: --file-check-frequency="20s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424785 4820 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424795 4820 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424804 4820 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424814 4820 flags.go:64] FLAG: --healthz-port="10248" Feb 21 06:47:05 crc kubenswrapper[4820]: 
I0221 06:47:05.424824 4820 flags.go:64] FLAG: --help="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424834 4820 flags.go:64] FLAG: --hostname-override="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424847 4820 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424857 4820 flags.go:64] FLAG: --http-check-frequency="20s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424866 4820 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424876 4820 flags.go:64] FLAG: --image-credential-provider-config="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424885 4820 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424896 4820 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424906 4820 flags.go:64] FLAG: --image-service-endpoint="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424916 4820 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424925 4820 flags.go:64] FLAG: --kube-api-burst="100" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424935 4820 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424945 4820 flags.go:64] FLAG: --kube-api-qps="50" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424954 4820 flags.go:64] FLAG: --kube-reserved="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424963 4820 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424972 4820 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424983 4820 flags.go:64] FLAG: --kubelet-cgroups="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.424992 4820 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425001 4820 flags.go:64] FLAG: --lock-file="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425011 4820 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425020 4820 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425030 4820 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425044 4820 flags.go:64] FLAG: --log-json-split-stream="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425054 4820 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425063 4820 flags.go:64] FLAG: --log-text-split-stream="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425072 4820 flags.go:64] FLAG: --logging-format="text" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425082 4820 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425092 4820 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425102 4820 flags.go:64] FLAG: --manifest-url="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425111 4820 flags.go:64] FLAG: --manifest-url-header="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425123 4820 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425133 4820 flags.go:64] FLAG: --max-open-files="1000000" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425145 4820 flags.go:64] FLAG: --max-pods="110" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425154 4820 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.425165 4820 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425176 4820 flags.go:64] FLAG: --memory-manager-policy="None" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425185 4820 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425196 4820 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425205 4820 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425215 4820 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425262 4820 flags.go:64] FLAG: --node-status-max-images="50" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425272 4820 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425283 4820 flags.go:64] FLAG: --oom-score-adj="-999" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425293 4820 flags.go:64] FLAG: --pod-cidr="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425304 4820 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425318 4820 flags.go:64] FLAG: --pod-manifest-path="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425327 4820 flags.go:64] FLAG: --pod-max-pids="-1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425338 4820 flags.go:64] FLAG: --pods-per-core="0" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425348 4820 flags.go:64] FLAG: --port="10250" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425358 4820 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425368 4820 flags.go:64] FLAG: --provider-id="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425378 4820 flags.go:64] FLAG: --qos-reserved="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425387 4820 flags.go:64] FLAG: --read-only-port="10255" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425398 4820 flags.go:64] FLAG: --register-node="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425408 4820 flags.go:64] FLAG: --register-schedulable="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425418 4820 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425433 4820 flags.go:64] FLAG: --registry-burst="10" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425443 4820 flags.go:64] FLAG: --registry-qps="5" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425453 4820 flags.go:64] FLAG: --reserved-cpus="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425464 4820 flags.go:64] FLAG: --reserved-memory="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425476 4820 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425486 4820 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425496 4820 flags.go:64] FLAG: --rotate-certificates="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425506 4820 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425516 4820 flags.go:64] FLAG: --runonce="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425526 4820 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425536 4820 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425547 4820 flags.go:64] FLAG: --seccomp-default="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425557 4820 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425567 4820 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425577 4820 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425587 4820 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425597 4820 flags.go:64] FLAG: --storage-driver-password="root" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425607 4820 flags.go:64] FLAG: --storage-driver-secure="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425617 4820 flags.go:64] FLAG: --storage-driver-table="stats" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425627 4820 flags.go:64] FLAG: --storage-driver-user="root" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425636 4820 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425647 4820 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425656 4820 flags.go:64] FLAG: --system-cgroups="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425667 4820 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425683 4820 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425692 4820 flags.go:64] FLAG: --tls-cert-file="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425701 4820 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.425714 4820 flags.go:64] FLAG: --tls-min-version="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425724 4820 flags.go:64] FLAG: --tls-private-key-file="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425733 4820 flags.go:64] FLAG: --topology-manager-policy="none" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425743 4820 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425752 4820 flags.go:64] FLAG: --topology-manager-scope="container" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425762 4820 flags.go:64] FLAG: --v="2" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425773 4820 flags.go:64] FLAG: --version="false" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425785 4820 flags.go:64] FLAG: --vmodule="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425796 4820 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425806 4820 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426079 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426093 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426102 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426111 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426120 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426129 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426139 
4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426148 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426158 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426166 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426175 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426183 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426192 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426200 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426208 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426217 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426225 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426233 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426267 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426276 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426284 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 
crc kubenswrapper[4820]: W0221 06:47:05.426292 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426301 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426317 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426326 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426334 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426342 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426350 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426362 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426372 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426383 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426393 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426405 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426416 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426459 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc 
kubenswrapper[4820]: W0221 06:47:05.426472 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426487 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426500 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426513 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426526 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426536 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426544 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426553 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426562 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426570 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426578 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426587 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426595 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426603 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426612 
4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426620 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426628 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426637 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426646 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426665 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426680 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426689 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426700 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426710 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426734 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426745 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426753 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426765 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426775 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426786 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426795 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426805 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426814 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426822 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426831 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426840 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.426864 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.443324 4820 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.443369 4820 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443507 4820 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443519 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443529 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443542 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443567 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443577 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443587 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443596 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443605 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443615 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443623 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443665 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443676 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443687 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443696 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443705 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443713 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443721 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443729 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443738 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443747 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443755 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443763 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443771 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443780 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443789 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443797 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443806 4820 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443814 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443822 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443831 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443842 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443853 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443862 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443873 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443883 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443891 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443901 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443910 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443920 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443930 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443941 4820 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443953 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443964 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443976 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443991 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444005 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444019 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444031 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444042 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444051 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444059 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444068 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444076 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444085 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444093 4820 feature_gate.go:330] unrecognized 
feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444101 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444110 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444118 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444126 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444135 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444143 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444152 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444160 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444169 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444177 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444186 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444194 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444202 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444210 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 
06:47:05.444220 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.444263 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444555 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444574 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444588 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444601 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444612 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444622 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444634 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444649 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444661 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444673 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444686 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444697 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444707 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444718 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444728 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444738 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444749 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444761 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444772 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444784 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444794 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444804 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444814 
4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444824 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444835 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444846 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444856 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444866 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444876 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444889 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444902 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444914 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444925 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444936 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444949 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444960 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444972 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444982 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444993 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445004 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445015 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445025 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445035 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445046 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 
06:47:05.445056 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445068 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445082 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445095 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445105 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445117 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445129 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445140 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445152 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445163 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445174 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445329 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445344 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445354 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445365 4820 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445375 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445385 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445396 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445407 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445417 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445428 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445438 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445449 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445459 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445470 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445480 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445494 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.445511 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false 
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.446830 4820 server.go:940] "Client rotation is on, will bootstrap in background" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.459476 4820 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.459666 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.462024 4820 server.go:997] "Starting client certificate rotation" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.462080 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.464718 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 09:43:31.263871876 +0000 UTC Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.464884 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.495982 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.500877 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.504922 4820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.524037 4820 log.go:25] "Validated CRI v1 runtime API" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.567629 4820 log.go:25] "Validated CRI v1 image API" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.570285 4820 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.576965 4820 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-21-06-42-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.577017 4820 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.595981 4820 manager.go:217] Machine: {Timestamp:2026-02-21 06:47:05.593073984 +0000 UTC m=+0.626158222 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ec2c7a4f-4f2f-4567-9af1-65fc234d8f80 BootID:e79a2b5c-f808-4b7b-b373-103b6d673828 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:56:f4:28 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:56:f4:28 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:59:86:fd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e9:ea:1f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ee:9f:82 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2f:b3:1f Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b4:57:9b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:b3:2c:d2:fa:b6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:01:18:f8:bc:c2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.596382 4820 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.596528 4820 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.597747 4820 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.598676 4820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599165 4820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599866 4820 topology_manager.go:138] "Creating topology manager with none policy" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599878 4820 container_manager_linux.go:303] "Creating device plugin manager" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600470 4820 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600509 4820 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600728 4820 state_mem.go:36] "Initialized new in-memory state store" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.601141 4820 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606385 4820 kubelet.go:418] "Attempting to sync node with API server" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606413 4820 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606432 4820 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606449 4820 kubelet.go:324] "Adding apiserver pod source" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606472 4820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.612950 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.612933 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.613146 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.613158 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.613428 4820 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.614891 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.616881 4820 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618645 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618692 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618709 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618724 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618745 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618759 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618773 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618794 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618811 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618826 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618844 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618858 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.620657 4820 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.621336 4820 server.go:1280] "Started kubelet" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622613 4820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622731 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622857 4820 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 21 06:47:05 crc systemd[1]: Started Kubernetes Kubelet. Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.623457 4820 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.624946 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.624995 4820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625040 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:59:32.282021173 +0000 UTC Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625194 4820 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625226 4820 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625362 4820 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.625363 4820 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.625977 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.626056 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626685 4820 server.go:460] "Adding debug handlers to kubelet server" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626697 4820 factory.go:153] Registering CRI-O factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626821 4820 factory.go:221] Registration of the crio container factory successfully Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626914 4820 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626935 4820 factory.go:55] Registering systemd factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626947 4820 factory.go:221] Registration of the systemd container factory successfully Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626983 4820 factory.go:103] Registering Raw factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.627005 4820 manager.go:1196] Started watching for new ooms in manager Feb 21 
06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.627249 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.628078 4820 manager.go:319] Starting recovery of all containers Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.630612 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18963021e9321342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:47:05.621295938 +0000 UTC m=+0.654380166,LastTimestamp:2026-02-21 06:47:05.621295938 +0000 UTC m=+0.654380166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646191 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646398 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.646411 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646422 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646431 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646439 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648246 4820 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648312 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 21 
06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648323 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648336 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648351 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648365 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648377 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648386 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648398 4820 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648408 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648421 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648428 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648436 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648444 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648452 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648460 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648468 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648478 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648487 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648497 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648506 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648540 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648549 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648560 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648568 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648577 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648592 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648603 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648613 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648622 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648631 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648642 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648651 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648671 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648681 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648691 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648702 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648713 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648723 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648733 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648753 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648773 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648794 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648807 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648823 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648835 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648886 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648899 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648911 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648924 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648935 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648946 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648956 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648967 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649007 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649018 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649029 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649040 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649050 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649060 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649071 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649080 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649091 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649100 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649110 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649131 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649141 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649151 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649162 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649172 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649181 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649190 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649200 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649209 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649219 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649228 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649253 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649264 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649290 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649299 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649309 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649319 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649330 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649344 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649366 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649424 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649436 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649449 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649464 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649477 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649489 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649498 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649511 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649522 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649531 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649542 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649551 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649562 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649577 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649589 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649599 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649610 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649620 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649630 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649643 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649653 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649664 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649674 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649685 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649694 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649722 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649732 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649754 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649764 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649799 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649808 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649818 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649828 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649838 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649848 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649860 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649870 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649880 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649891 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649907 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649918 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649955 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649966 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649976 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649985 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649995 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650006 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650017 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650028 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650038 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650051 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650061 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650070 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650081 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650092 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650104 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650115 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650126 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650136 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650145 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650154 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650166 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650177 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650188 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650198 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650208 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650217 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650227 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650258 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650270 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650283 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650294 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650305 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650315 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650326 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650339 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650352 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650363 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650377 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650389 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650401 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650413 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650426 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650437 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650446 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650457 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650468 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650478 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650488 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650498 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650507 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650518 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650528 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650541 4820 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650551 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650561 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650570 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650579 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650588 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650598 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650607 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650629 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650640 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650650 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650666 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650677 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650691 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650701 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650711 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650721 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650732 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650754 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650777 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650791 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650802 4820 reconstruct.go:97] "Volume reconstruction finished" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650811 4820 reconciler.go:26] "Reconciler: start to sync state" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.659976 4820 manager.go:324] Recovery completed Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.680282 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683084 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683906 4820 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683938 4820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683968 4820 state_mem.go:36] "Initialized new in-memory state store" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.692309 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695299 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695368 4820 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695423 4820 kubelet.go:2335] "Starting kubelet main sync loop" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.695509 4820 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.697823 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.697992 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 
06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.707223 4820 policy_none.go:49] "None policy: Start" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.708545 4820 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.708585 4820 state_mem.go:35] "Initializing new in-memory state store" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.725585 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763219 4820 manager.go:334] "Starting Device Plugin manager" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763293 4820 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763337 4820 server.go:79] "Starting device plugin registration server" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764037 4820 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764093 4820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764328 4820 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764450 4820 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764470 4820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.772328 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.796591 4820 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.796699 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798617 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798808 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798888 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800414 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800535 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800572 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801474 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801745 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802469 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802579 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.803008 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.805987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.805995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807510 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807549 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.828408 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852271 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852349 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852559 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852746 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852770 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.864292 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865806 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.866337 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc 
kubenswrapper[4820]: I0221 06:47:05.954288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954357 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954495 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc 
kubenswrapper[4820]: I0221 06:47:05.954474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954577 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954769 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.067288 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069148 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.069924 4820 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.143906 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.159095 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.169374 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.199476 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.206646 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2 WatchSource:0}: Error finding container da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2: Status 404 returned error can't find the container with id da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2 Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.208374 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8 WatchSource:0}: Error finding container bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8: Status 404 returned error can't find the container with id bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8 Feb 21 06:47:06 crc 
kubenswrapper[4820]: I0221 06:47:06.210748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.217704 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239 WatchSource:0}: Error finding container a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239: Status 404 returned error can't find the container with id a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239 Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.225848 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e WatchSource:0}: Error finding container ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e: Status 404 returned error can't find the container with id ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.229831 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.238657 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837 WatchSource:0}: Error finding container 4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837: Status 
404 returned error can't find the container with id 4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837 Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.470300 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471668 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.472342 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.624411 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.625443 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:35:08.309466497 +0000 UTC Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.701803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.703524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.705404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.707258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.708590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2"} Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.859000 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.859330 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.000451 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.000545 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.030568 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.056344 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.056427 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" 
logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.070669 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.070901 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.273218 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275717 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.276427 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.521970 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 
06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.523301 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.624512 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.625663 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:59:23.26955843 +0000 UTC Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712371 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0" exitCode=0 Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712452 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713344 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716875 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.718482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 
06:47:07.718531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.718547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.719567 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" exitCode=0 Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.719732 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.722868 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d939ddef7c34f71808d30ff7720850717a4199e4ea4819f5499040b68c80903" exitCode=0 Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.722971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d939ddef7c34f71808d30ff7720850717a4199e4ea4819f5499040b68c80903"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.723027 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.725971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726663 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.728966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.729023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.729046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730296 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4" exitCode=0 Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4"} Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730448 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.196874 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.624223 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.626264 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:14:41.001274719 +0000 UTC Feb 21 06:47:08 crc kubenswrapper[4820]: E0221 06:47:08.631756 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733906 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.734076 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738231 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738278 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738360 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740668 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="aaf7562373015648060c40542c1d56ffebf82fbf72137a679b9bad32eca02126" exitCode=0 Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aaf7562373015648060c40542c1d56ffebf82fbf72137a679b9bad32eca02126"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740804 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743736 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743776 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e"} Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749982 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.750083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.750101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.877287 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878520 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:08 crc kubenswrapper[4820]: E0221 06:47:08.879050 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.354506 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.626617 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:46:33.72634295 +0000 UTC Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749150 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="79304ceda3b0c42e04de9bcaaa0aebb6dc0b6c2e659f8a7aecae0478eaccb23e" exitCode=0 Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749286 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749297 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749325 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749348 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749402 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749435 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749513 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"79304ceda3b0c42e04de9bcaaa0aebb6dc0b6c2e659f8a7aecae0478eaccb23e"} Feb 21 06:47:09 crc 
kubenswrapper[4820]: I0221 06:47:09.751205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751558 4820 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.626772 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:47:47.220441323 +0000 UTC Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.757628 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d66121d9b9a4a8e1e35a08932ab77167bea6664ba299d44ac1aa1b387d631e9b"} Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"544001f28a6a7bcbc04077600b5db500bfe354da92376c5d8fbeb514da8d163a"} Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e23741c116b20774f6b21bc77a91cd8506f8c10b81c704733c941ae0d8cec77"} Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"012a5c9a54a954a3807cb00fa356acfd255ce1ca4b456e6c6061caeb33c66d52"} Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:10.999986 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.000148 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.000188 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.627473 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:32:06.046233741 +0000 UTC Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 
06:47:11.644898 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.702616 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"43d80be6c691e1caf02784f2a9617100c9d819907346bf869b267c7a6e0a5a23"} Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766575 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766663 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766687 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768462 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.079214 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080978 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.354526 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.354628 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.628277 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:49:18.39079604 +0000 UTC Feb 21 06:47:12 crc 
kubenswrapper[4820]: I0221 06:47:12.768843 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:13 crc kubenswrapper[4820]: I0221 06:47:13.629355 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:14:22.546759408 +0000 UTC Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.189595 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.190398 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.441764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 
06:47:14.441926 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.489758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.489908 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.630146 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:07:00.897194449 +0000 UTC Feb 21 06:47:15 crc kubenswrapper[4820]: I0221 06:47:15.631006 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:24:42.600688496 +0000 UTC Feb 21 
06:47:15 crc kubenswrapper[4820]: E0221 06:47:15.772649 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.106747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.106966 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.631761 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:49:51.443426862 +0000 UTC Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.591431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.591688 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.599740 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.632385 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:56:04.895825752 +0000 UTC Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.780487 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.787695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.632582 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:37:03.7575183 +0000 UTC Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.783688 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 
06:47:18.784866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.784917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.784933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.482140 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.482315 4820 trace.go:236] Trace[232516160]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.480) (total time: 10001ms): Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[232516160]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.482) Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[232516160]: [10.001947768s] [10.001947768s] END Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.482355 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.566789 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.566906 4820 trace.go:236] Trace[1886156297]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.565) (total time: 10001ms): Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[1886156297]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.566) Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[1886156297]: [10.001257169s] [10.001257169s] END Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.566943 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.625391 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.633616 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:47:03.642941269 +0000 UTC Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.788792 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791138 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" exitCode=255 Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19"} Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791387 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.793550 4820 scope.go:117] "RemoveContainer" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.906768 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.906847 4820 trace.go:236] Trace[2009007637]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.904) (total time: 10002ms): Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[2009007637]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.906) Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[2009007637]: [10.002032812s] [10.002032812s] END Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.906871 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.054046 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: W0221 06:47:20.057474 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.057561 4820 trace.go:236] Trace[436150591]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:10.056) (total time: 10001ms): Feb 21 06:47:20 crc kubenswrapper[4820]: Trace[436150591]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:20.057) Feb 21 06:47:20 crc kubenswrapper[4820]: Trace[436150591]: [10.001331171s] [10.001331171s] END Feb 21 06:47:20 crc kubenswrapper[4820]: E0221 06:47:20.057582 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: 
TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.402045 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.402225 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.448201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.634215 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:26:17.941324162 +0000 UTC Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.652730 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.652788 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.656329 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.656384 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.795060 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff"} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796643 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796643 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797471 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.808315 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.005581 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]log ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]etcd ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 21 06:47:21 crc kubenswrapper[4820]: 
[+]poststarthook/generic-apiserver-start-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-filter ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiextensions-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiextensions-controllers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/crd-informer-synced ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-system-namespaces-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 21 06:47:21 crc kubenswrapper[4820]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/bootstrap-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-aggregator-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-registration-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-discovery-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]autoregister-completion ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapi-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: livez check failed Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.005648 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.635021 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:50:38.864133032 +0000 UTC Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799399 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799517 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799589 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.356390 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.356566 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.635613 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:51:02.612235175 +0000 UTC Feb 21 06:47:22 crc 
kubenswrapper[4820]: I0221 06:47:22.802337 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:23 crc kubenswrapper[4820]: I0221 06:47:23.635748 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:50:27.343365319 +0000 UTC Feb 21 06:47:24 crc kubenswrapper[4820]: I0221 06:47:24.215284 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:24 crc kubenswrapper[4820]: I0221 06:47:24.636368 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:26:41.20591746 +0000 UTC Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.632232 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.636537 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:11:35.487678871 +0000 UTC Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.637464 4820 kubelet_node_status.go:99] "Unable to register node with API 
server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.637578 4820 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.653255 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.673932 4820 csr.go:261] certificate signing request csr-b45vm is approved, waiting to be issued Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.687565 4820 csr.go:257] certificate signing request csr-b45vm is issued Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.728512 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.772955 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.860457 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.004682 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.007763 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.418037 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.620384 4820 apiserver.go:52] "Watching apiserver" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.623727 4820 reflector.go:368] Caches 
populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.624625 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.625302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.625920 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626041 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626290 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626473 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626525 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626612 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626763 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.627813 4820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.630952 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.631308 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632433 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632497 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632630 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632640 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632951 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.633075 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.633291 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.636877 4820 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:26:29.996853087 +0000 UTC Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643118 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643206 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643256 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643403 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643478 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643508 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") 
" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643624 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643687 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643718 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.643784 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644048 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644115 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.644345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644364 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644384 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644428 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644460 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644639 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 06:47:26 
crc kubenswrapper[4820]: I0221 06:47:26.644948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645059 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645223 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645282 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645316 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 
06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645385 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645419 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645481 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645701 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645742 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643702 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644546 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644614 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644698 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644951 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646100 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646177 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646205 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646256 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646448 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646511 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646610 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646625 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646641 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646664 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647068 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.647561 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.147537333 +0000 UTC m=+22.180621641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645972 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645762 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.648904 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649815 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.650180 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.650389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654824 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655056 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 
crc kubenswrapper[4820]: I0221 06:47:26.655265 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655429 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.656442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.656975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657085 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657204 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657213 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657457 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657748 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658867 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660069 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660133 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660556 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660626 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660845 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660910 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661092 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661281 
4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661316 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.661650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661714 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661811 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661842 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661911 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662015 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662048 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662219 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662275 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662310 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.662410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662480 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664018 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664451 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664991 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665023 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665216 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665274 4820 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665294 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665316 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665335 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665355 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665375 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.665393 4820 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665414 4820 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665434 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665454 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665473 4820 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665493 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665511 4820 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 
06:47:26.665532 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665551 4820 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665570 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665592 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665610 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665629 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665649 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665667 4820 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665687 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665705 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665724 4820 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665743 4820 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665764 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665784 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665804 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665824 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665842 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665860 4820 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665879 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665897 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665916 4820 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666043 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 
06:47:26.666064 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666302 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666326 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666346 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666365 4820 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666384 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666403 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666695 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666719 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666738 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666757 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666777 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666796 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666818 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666838 4820 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") 
on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666860 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666879 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666899 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666917 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666938 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666957 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666975 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671143 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661469 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662733 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662871 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663088 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664420 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665135 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665279 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.668288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.668368 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669921 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669946 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670270 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671482 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672305 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672358 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672552 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672728 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673127 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673196 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673738 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673780 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673914 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674704 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674887 4820 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675123 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675331 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.175301671 +0000 UTC m=+22.208385909 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675919 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.676046 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.176033431 +0000 UTC m=+22.209117859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.676053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.676819 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.687364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.687503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.689834 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-21 06:42:25 +0000 UTC, rotation deadline is 2026-12-24 17:02:21.362591598 +0000 UTC Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.689859 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7354h14m54.672735576s for next certificate rotation Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690016 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690035 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690050 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690310 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.190271395 +0000 UTC m=+22.223355593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.692533 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.693697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.693787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694267 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694389 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694489 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694687 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.194672046 +0000 UTC m=+22.227756244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695183 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695222 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695234 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.696040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.696090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695536 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695579 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695643 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.697355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698888 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.699471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701485 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.704176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.704200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.705446 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.706577 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.707342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708346 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708944 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709812 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710721 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.711328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.711978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.712471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.712604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721701 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721761 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.725684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.741915 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.743049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.745480 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.746633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.746749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.753023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.761689 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768353 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768597 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768613 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768629 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768643 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768679 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768690 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768699 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768707 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768718 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768727 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768745 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768754 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768763 4820 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768771 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768780 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768788 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768797 4820 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768806 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768814 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768822 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768834 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768846 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768862 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768874 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768884 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768895 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768905 4820 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768914 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768922 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768930 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768938 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768946 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768964 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768972 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768980 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768988 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768996 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769004 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769012 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769020 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" 
DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769028 4820 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769039 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769049 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769060 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769070 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769081 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769091 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769101 4820 reconciler_common.go:293] "Volume detached 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769113 4820 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769125 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769138 4820 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769149 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769162 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769174 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769187 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769198 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769210 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769222 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769255 4820 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769268 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769279 4820 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769292 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.769303 4820 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769313 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769324 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769336 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769348 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769362 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769374 4820 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769385 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769399 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769410 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769422 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769437 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769448 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769461 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769472 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769483 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769495 4820 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769506 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769516 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769527 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769538 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769549 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769559 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769570 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769578 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769587 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769602 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769612 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" 
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769634 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769645 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769656 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769666 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769676 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769686 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769699 4820 reconciler_common.go:293] 
"Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769710 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769721 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769732 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769745 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769758 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769769 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769780 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769793 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769805 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769816 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769828 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769840 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769854 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769869 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769881 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769892 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769903 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769912 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769920 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769928 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769937 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") 
on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769947 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769955 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769964 4820 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769973 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769981 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769990 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769999 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.770008 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770024 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770032 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770040 4820 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770049 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770978 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.816444 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.817147 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822756 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" exitCode=255 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff"} Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822843 4820 scope.go:117] "RemoveContainer" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.832848 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.833121 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.833428 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.838953 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, 
/tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.849181 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.857725 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.867781 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.878435 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.887719 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.895604 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.959606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.980326 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: W0221 06:47:26.992790 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507 WatchSource:0}: Error finding container 383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507: Status 404 returned error can't find the container with id 383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022268 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022535 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tv4k8"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022905 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.025565 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.027266 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.029932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.039490 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.048778 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.057918 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.061680 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69 WatchSource:0}: Error finding container 83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69: Status 404 returned error can't find the container with id 
83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.067066 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.072132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.072189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.074990 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.083483 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.091804 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.099091 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.172986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: 
\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.173161 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.173138572 +0000 UTC m=+23.206222770 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.190362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: 
\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273862 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273900 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273916 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.273994 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274034881 +0000 UTC m=+23.307119079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274058 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274094 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274112 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274127 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274139 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274150 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274198 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274182655 +0000 UTC m=+23.307266853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274099 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274272 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274210606 +0000 UTC m=+23.307294834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274350 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274333609 +0000 UTC m=+23.307417847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.346218 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.359467 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b29fd0_922f_41c6_8ff4_dfa111ff89ad.slice/crio-1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af WatchSource:0}: Error finding container 1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af: Status 404 returned error can't find the container with id 1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.463368 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xpb8z"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464029 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qth8z"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464361 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.467792 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-94gxr"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.468084 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486225 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486310 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486225 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486640 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486651 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486728 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486757 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486844 4820 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.489692 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.497153 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.508209 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.520544 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.532324 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.541200 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.549074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.556219 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575140 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90
d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575545 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575610 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575626 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575728 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575807 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575883 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575972 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576010 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576047 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " 
pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.584253 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.595649 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.605577 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.613075 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.621448 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.629304 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.635495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.637217 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:58:22.827471178 +0000 UTC Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.643158 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.650825 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.658929 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.667792 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676540 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676781 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") 
pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677115 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" 
Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677224 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: 
I0221 06:47:27.677355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677651 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677662 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: 
\"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677834 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678078 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.690458 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, 
/tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.698114 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.698260 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.700905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701401 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.702768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.704167 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.704424 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.705104 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.706180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.706411 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.706639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.707003 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.708005 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.708499 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.709040 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.709925 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.710516 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.711400 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.711954 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.713036 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.713551 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.714045 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.714929 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.715472 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.716385 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.716770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.717362 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.718280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.718711 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.719817 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.720275 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.721328 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.721770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.722364 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.723438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.723926 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.724799 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.725307 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.726124 4820 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.726221 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.727803 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.728709 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.729097 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.730484 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.731094 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.732047 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.732715 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.733770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.734317 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.735387 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.735975 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.736912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.737732 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.738538 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.739393 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.740354 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.742085 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.742688 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.743202 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.744339 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.744990 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.745949 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.783104 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.793284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.807090 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3 WatchSource:0}: Error finding container f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3: Status 404 returned error can't find the container with id f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.812772 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.826044 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.833005 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.833142 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.835112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836026 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.836303 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabdb469c_ba72_4790_9ce3_785f4facbcb9.slice/crio-6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39 WatchSource:0}: Error finding container 6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39: Status 404 returned error can't find the container with id 6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tv4k8" event={"ID":"80b29fd0-922f-41c6-8ff4-dfa111ff89ad","Type":"ContainerStarted","Data":"0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.837026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tv4k8" event={"ID":"80b29fd0-922f-41c6-8ff4-dfa111ff89ad","Type":"ContainerStarted","Data":"1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838572 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838859 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838912 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.840280 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.840286 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 21 06:47:27 crc 
kubenswrapper[4820]: I0221 06:47:27.841139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.843689 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.848183 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.848223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"72f690b7f8a99b99dc23e54a5cfdf8fe886c8872cc1a26dea16211d9cfdf1eb5"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.849147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerStarted","Data":"fe06c43f986174ea48f11afd7404f82f08745440e74b79dc022d8bf8b69f92a8"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.854801 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.865323 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.877192 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879264 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879381 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879453 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.879505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879527 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879545 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.879613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879721 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc 
kubenswrapper[4820]: I0221 06:47:27.879755 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.888502 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.896458 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.905944 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.917590 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.928027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.972700 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: 
\"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982080 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982217 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982335 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod 
\"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.983331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.984062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.986901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.996446 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.997706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.016836 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.027412 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.040382 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.052146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.062919 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.074516 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.084424 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.095230 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.107163 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.130562 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.161815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.171975 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: W0221 06:47:28.172349 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda70ec449_ba11_47dd_a60c_f77993670045.slice/crio-118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c WatchSource:0}: Error finding container 118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c: Status 404 returned error can't find the container with id 118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.183070 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.183262 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.183222222 +0000 UTC m=+25.216306430 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284576 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284566 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284619 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.284607474 +0000 UTC m=+25.317691672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284650 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.284630914 +0000 UTC m=+25.317715132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284700 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284736 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284743 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284752 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284763 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284766 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284802 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.28479281 +0000 UTC m=+25.317877018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284837 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.2848141 +0000 UTC m=+25.317898338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.638218 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:46:54.049177276 +0000 UTC Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.695835 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.695924 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.695983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.696104 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.853762 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297" exitCode=0 Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.854049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.855996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.856033 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.857273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.857323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} Feb 21 
06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860047 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" exitCode=0 Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.873827 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.897059 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.105862 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.136441 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.166555 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.188474 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.200740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.213190 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.234125 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.265211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.300901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.318366 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.330809 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.341931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.355818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.358801 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.362860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.366115 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.368571 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.380985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.395504 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.412552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.427589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.439603 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.450781 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.467017 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.478032 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.488967 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.500901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.510318 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.524465 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.538740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.553678 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.567181 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.584003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.597492 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.614018 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.630725 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.638925 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:04:24.29901921 +0000 UTC Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.646547 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.678684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.696109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:29 crc kubenswrapper[4820]: E0221 06:47:29.696228 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865850 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" 
event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.867206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.869135 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f" exitCode=0 Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.869624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.882969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.900566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.916677 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.929937 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.941393 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.964860 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.992023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.020623 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.032186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.038407 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t5qxz"] Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.038820 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.053939 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.054431 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.054588 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.062513 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.082986 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.102273 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.122461 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.151613 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/o
pt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.191367 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.219653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.219775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.219848 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-21 06:47:34.219808346 +0000 UTC m=+29.252892584 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.221084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 
06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.241842 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.257224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.289608 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321057 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321097 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321210 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321234 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321267 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321275 4820 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321290 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321298 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321280141 +0000 UTC m=+29.354364359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321321 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321306331 +0000 UTC m=+29.354390529 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321341 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321331802 +0000 UTC m=+29.354416130 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321430 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321484 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321509 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321602 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321572649 +0000 UTC m=+29.354656897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.333086 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.350218 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: W0221 06:47:30.364433 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8767767_a460_416a_b2c2_82a8d9eebb1e.slice/crio-65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5 WatchSource:0}: Error finding container 65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5: Status 404 returned error can't find the container with id 65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5 Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.375003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.414391 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.449996 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.497402 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.544498 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.570383 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.611000 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.639719 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:39:25.193539505 +0000 UTC Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.654906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.693893 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.696207 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.696280 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.696429 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.696509 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.731948 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.773449 4820 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.811224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.850234 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.872901 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t5qxz" 
event={"ID":"c8767767-a460-416a-b2c2-82a8d9eebb1e","Type":"ContainerStarted","Data":"c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.872950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t5qxz" event={"ID":"c8767767-a460-416a-b2c2-82a8d9eebb1e","Type":"ContainerStarted","Data":"65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.875554 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b" exitCode=0 Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.875631 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.890399 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.935917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.971709 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.012330 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.050969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.091074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.128582 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.170030 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.214756 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.249515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.291994 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.333685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.374818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.415277 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.454180 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9
1babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.492149 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.531648 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 
06:47:31.572802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 
06:47:31.614475 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.640592 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:15:13.405520813 +0000 UTC Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.669480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.690869 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.696211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:31 crc kubenswrapper[4820]: E0221 06:47:31.696442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.737050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.777231 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.812033 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.857732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.880518 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88" exitCode=0 Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.880600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88"} Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.895405 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.932093 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.968913 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.010761 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.038011 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.041682 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.055138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.103870 4820 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.104130 4820 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.117859 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.128830 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.135928 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.155958 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160514 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.175186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.177192 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181497 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.194625 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.195139 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197470 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.213289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.254558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.289719 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.298990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299045 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.329626 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.370469 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 
06:47:32.402115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402128 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.410068 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.457100 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.492327 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.536364 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.571438 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607654 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.641706 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:00:09.941735033 +0000 UTC Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.696347 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.696383 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.696471 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.702986 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.888895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.892972 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f" exitCode=0 Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.893020 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.905909 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.918267 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920344 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.929804 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.942611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.953190 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.969282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.980249 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.994705 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.009576 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.017907 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024811 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.029847 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.053434 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.093539 4820 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128508 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.132487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230670 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333297 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436807 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.641937 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:06:10.735137349 +0000 UTC Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.696343 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:33 crc kubenswrapper[4820]: E0221 06:47:33.696532 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746819 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.900388 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef" exitCode=0 Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.900440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.925608 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.946624 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.960806 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.971805 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.982380 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.993631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.006444 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.017073 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.027885 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.038665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.049926 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058905 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.066873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.083924 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.102006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c
4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161115 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.258943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.259122 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.259098377 +0000 UTC m=+37.292182575 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263554 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360480 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360503 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360515 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360555 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360542732 +0000 UTC m=+37.393626920 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360480 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360588 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360634 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360685 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360695 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360673485 +0000 UTC m=+37.393757773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360705 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360720 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360712206 +0000 UTC m=+37.393796524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360797 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360770588 +0000 UTC m=+37.393854806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.365976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366056 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570592 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.642672 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:28:29.382271658 +0000 UTC Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672799 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.696056 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.696095 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.696150 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.696267 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775457 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878376 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.908573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerStarted","Data":"17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.913557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.913912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.914108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.925168 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.943165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.944127 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.950706 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.961407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.972685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.980978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981038 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981072 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.984371 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.997319 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.008451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.020103 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.030358 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.040385 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.054960 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.068024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.083214 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084249 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.096811 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.106481 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.116534 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.127180 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.138536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.149348 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.159033 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.168426 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.178516 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.191492 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.202459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.219302 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.231390 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.242543 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.258317 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.463300 4820 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.643470 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:22:42.855842123 +0000 UTC Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.696176 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:35 crc kubenswrapper[4820]: E0221 06:47:35.696336 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698299 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.708633 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.720164 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.732617 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.743027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.752142 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.762287 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.776815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.788818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.799969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.811746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.824345 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.842133 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.868999 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.883951 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.915518 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005357 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107658 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209831 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312834 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415171 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620141 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.644278 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:42:45.785721922 +0000 UTC Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.695783 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:36 crc kubenswrapper[4820]: E0221 06:47:36.695898 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.695789 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:36 crc kubenswrapper[4820]: E0221 06:47:36.695958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722384 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825108 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825192 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.919823 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/0.log" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.922917 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" exitCode=1 Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.922948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.924072 4820 scope.go:117] "RemoveContainer" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927191 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927200 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.951870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.964346 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.973957 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.983411 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.996035 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.008561 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.020611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030641 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.032102 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.047928 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.065376 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.085184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.100497 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.120736 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06
:47:36Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 06:47:36.845043 6158 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 06:47:36.845162 6158 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 06:47:36.845174 6158 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 06:47:36.845198 6158 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 06:47:36.845216 6158 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 06:47:36.845256 6158 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 06:47:36.845264 6158 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 06:47:36.845283 6158 factory.go:656] Stopping watch factory\\\\nI0221 06:47:36.845295 6158 ovnkube.go:599] Stopped ovnkube\\\\nI0221 06:47:36.845314 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 06:47:36.845323 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 06:47:36.845328 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 06:47:36.845333 6158 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 06:47:36.845339 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0221 06:47:36.845343 6158 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.131815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc 
kubenswrapper[4820]: I0221 06:47:37.133432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133443 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236202 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338342 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440375 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543334 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.644415 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:40:33.554525045 +0000 UTC Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.696348 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:37 crc kubenswrapper[4820]: E0221 06:47:37.696658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850325 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.931932 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.932665 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/0.log" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935691 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" exitCode=1 Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935837 4820 scope.go:117] "RemoveContainer" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.936940 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:37 crc kubenswrapper[4820]: E0221 06:47:37.937178 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952468 4820 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.957616 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.986631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:36Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 06:47:36.845043 6158 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 06:47:36.845162 6158 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 06:47:36.845174 6158 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 06:47:36.845198 6158 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 06:47:36.845216 6158 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 06:47:36.845256 6158 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 06:47:36.845264 6158 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 06:47:36.845283 6158 factory.go:656] Stopping watch factory\\\\nI0221 06:47:36.845295 6158 ovnkube.go:599] Stopped ovnkube\\\\nI0221 06:47:36.845314 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 06:47:36.845323 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 06:47:36.845328 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 06:47:36.845333 6158 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 06:47:36.845339 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0221 06:47:36.845343 6158 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114
c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:37.999977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.012156 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.023007 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.038563 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.055137 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.055990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056120 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.070581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.084030 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.096870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.113290 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.134378 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.148148 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.158960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159098 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.166807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262386 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.470037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.644819 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:46:41.898892268 +0000 UTC Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676398 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.696099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.696109 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.696384 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.696546 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778357 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880419 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.944545 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.950335 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.950595 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.965397 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.979065 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983412 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.998068 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.014407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.031883 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.051539 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.069226 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086693 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.090569 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.109223 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.126726 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.141384 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.154350 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.181143 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.188966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189049 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.193869 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291515 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393885 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496108 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598693 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.645595 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:10:34.64466607 +0000 UTC Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.696134 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:39 crc kubenswrapper[4820]: E0221 06:47:39.696391 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700465 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802696 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904466 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904500 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109676 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212550 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.397809 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7"] Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.398293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.401321 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.401722 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417722 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.419350 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.421787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.421955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.422071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.422277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.432602 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.443348 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.454209 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.467312 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.487875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.506379 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.519932 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523065 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") 
pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.524033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.524142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.537052 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.541002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.552184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.570533 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.574316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.583066 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.594835 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.610772 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622875 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc 
kubenswrapper[4820]: I0221 06:47:40.622890 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622901 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.646336 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:37:30.057418656 +0000 UTC Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.696637 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.696661 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:40 crc kubenswrapper[4820]: E0221 06:47:40.696765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:40 crc kubenswrapper[4820]: E0221 06:47:40.696830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.716654 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: W0221 06:47:40.730062 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd837134d_9746_4fda_af7c_acf3077a61c7.slice/crio-71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54 WatchSource:0}: Error finding container 71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54: Status 404 returned error can't find the container with id 71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54 Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.929885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930586 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.958833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.958875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033195 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238770 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444175 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.511882 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"] Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.512678 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.512799 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.532479 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546950 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546936 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.547136 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.564552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.580328 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.594164 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.618384 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632263 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632831 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632923 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.647356 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:35:33.082698991 +0000 UTC Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.647576 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc 
kubenswrapper[4820]: I0221 06:47:41.649203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.662965 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.673275 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.687049 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.695745 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.695866 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.703774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.716809 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.731659 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.733784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.733822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.734082 4820 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.734126 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.234114317 +0000 UTC m=+37.267198515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.747225 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751206 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751911 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.759965 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add
5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc 
kubenswrapper[4820]: I0221 06:47:41.853717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853731 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.963297 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.979772 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.000455 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.012320 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.024316 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc 
kubenswrapper[4820]: I0221 06:47:42.034508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.046475 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059348 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.063005 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.077141 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.092299 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.103631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.114108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.123696 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.145011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.158435 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161180 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.173038 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.184824 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.238475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.238667 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.238869 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:43.238761287 +0000 UTC m=+38.271845515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294597 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.308585 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.333314 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337887 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.339125 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.339387 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.339359307 +0000 UTC m=+53.372443545 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.355025 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{...}\" [node status patch payload elided; byte-identical to the 06:47:42.333314 entry above] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.359969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360074 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.377126 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381729 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.400101 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.400360 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440641 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440733 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440745 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440772 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440792 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440808 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440819 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440798972 +0000 UTC m=+53.473883200 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440905 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440885144 +0000 UTC m=+53.473969382 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440927 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440914965 +0000 UTC m=+53.473999203 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440975 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441033 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441061 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441156 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.44113069 +0000 UTC m=+53.474214918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.647824 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:29:10.070003584 +0000 UTC Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696135 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696168 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696209 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696471 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696558 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.697225 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710729 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915968 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.967804 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.970075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.970621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.985736 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.004175 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.016540 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018787 4820 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.034513 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 
21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.052566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.069870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.120934 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121199 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.141906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.160305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.176198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.188581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.209631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.224982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225044 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.239429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.250186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.250353 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.250433 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:45.25040735 +0000 UTC m=+40.283491568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.258019 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.277387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc 
kubenswrapper[4820]: I0221 06:47:43.328368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430859 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533626 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636165 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.648183 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:39:25.127462827 +0000 UTC Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.695754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.695896 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738824 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047429 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355888 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458934 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.561990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562071 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.598418 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.599501 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.599747 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.648759 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:47:25.735301785 +0000 UTC Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668905 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695855 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695797 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.695946 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.696054 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.696122 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771999 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874880 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977539 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079956 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182697 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.272789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.272961 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.273023 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:49.273005757 +0000 UTC m=+44.306089965 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284851 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593374 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.649186 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:58:21.018246772 +0000 UTC Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.695850 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.695972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696090 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.696107 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.713298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.731994 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.749849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.766060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.783491 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.806125 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.821349 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.838822 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.854172 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.874375 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.894515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.914387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.926212 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.936911 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc 
kubenswrapper[4820]: I0221 06:47:45.950146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.980459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003433 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106603 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314916 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.522000 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.650328 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:00:15.967530985 +0000 UTC Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696584 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.696925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.696899 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.697013 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.729003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.831905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832289 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936491 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.038974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039115 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.142006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.142023 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349539 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451764 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.650615 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:30:34.233894909 +0000 UTC Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.656935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.656990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.696652 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:47 crc kubenswrapper[4820]: E0221 06:47:47.696930 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759173 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861555 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966981 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069351 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171923 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377892 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.583966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.651132 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:19:02.234532979 +0000 UTC Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.686947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.686995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696475 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.696624 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.696916 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.697088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790431 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894193 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.995949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.995996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.099946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100275 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203093 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203141 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306338 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.317157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.317461 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.317654 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:57.317620126 +0000 UTC m=+52.350704364 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512682 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615909 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.651822 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:23:46.64606798 +0000 UTC Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.696743 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.696921 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718315 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.821015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.821029 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.923970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924075 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029396 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.136002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239607 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342940 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446537 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.549693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.652877 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:32:22.721195384 +0000 UTC Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654794 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696084 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.696567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.697217 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.697562 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757465 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757512 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861473 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.170900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.170982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171050 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273896 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.274006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.274023 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.479966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480053 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480065 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.583009 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.653145 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:54:16.354922521 +0000 UTC Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.696327 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:51 crc kubenswrapper[4820]: E0221 06:47:51.696507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788583 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892478 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099413 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203163 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408810 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512719 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.617002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.653554 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:19:17.049017446 +0000 UTC Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.695962 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696133 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.695961 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.696297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696489 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696633 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.729901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730185 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.745313 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z"
Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.750013 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794460 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.811719 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.814994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815088 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.828932 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.829081 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830831 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933696 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140939 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244713 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347911 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451533 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554931 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.654757 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:40:04.181766418 +0000 UTC Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.696193 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:53 crc kubenswrapper[4820]: E0221 06:47:53.696417 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761384 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864894 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967932 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071936 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175158 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.278973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279069 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484888 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.494607 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.507663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.527475 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.544873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.561167 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.574486 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.587035 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588580 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.601130 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.611301 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.622876 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.638100 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.650874 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.656523 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:24:09.176452722 +0000 UTC Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.664417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T0
6:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.682654 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690540 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696021 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696067 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696029 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696177 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696267 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696319 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.697767 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42e
a83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.707863 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc 
kubenswrapper[4820]: I0221 06:47:54.718882 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792923 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.998023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.998105 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100635 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.202959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.305009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.305020 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407465 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407495 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.510064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.510265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612649 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.657785 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:12:33.137277364 +0000 UTC Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.695793 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:55 crc kubenswrapper[4820]: E0221 06:47:55.695913 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.711409 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715488 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.726508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d7
42fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.740966 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.754611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.771255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.782977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.793286 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc 
kubenswrapper[4820]: I0221 06:47:55.803536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.813594 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc 
kubenswrapper[4820]: I0221 06:47:55.818070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818078 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.824226 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.834392 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.846481 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.855861 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.865655 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.876971 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.888989 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.919976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920119 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.110639 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.117084 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.120757 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.123998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124052 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.129506 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.141398 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.152658 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.163635 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.176439 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.190201 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.201694 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.212765 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.224402 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.238599 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.252292 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.270293 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.283568 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.293886 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc 
kubenswrapper[4820]: I0221 06:47:56.304077 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329780 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432214 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432264 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432316 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.535008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.658842 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:50:26.819825434 +0000 UTC Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696042 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696089 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696699 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696725 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844319 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.017221 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.019549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.020040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.034765 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc 
kubenswrapper[4820]: I0221 06:47:57.049312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049349 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.050447 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.068853 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.083968 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.099369 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.119531 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.129161 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.137916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc 
kubenswrapper[4820]: I0221 06:47:57.148528 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150982 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.160260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.172938 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.185663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.198487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.212543 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.227305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.238646 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.250343 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252742 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354911 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354921 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.403597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.403745 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.403811 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:13.403794149 +0000 UTC m=+68.436878347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456775 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.558868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.659378 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:42:05.786308811 +0000 UTC Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.696563 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.696887 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764468 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.025094 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.025836 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028834 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" exitCode=1 Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028921 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.030123 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.030442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.048432 4820 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3
dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.059471 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072296 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.075257 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.089864 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.100462 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.112742 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.133773 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a
1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.145092 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.155174 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc 
kubenswrapper[4820]: I0221 06:47:58.167040 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.182875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.201060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.212924 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.226701 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.237470 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.248653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.259023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.277008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.277016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.414926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.415072 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.415045061 +0000 UTC m=+85.448129259 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: 
I0221 06:47:58.481361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516420 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 
06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516477 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516460944 +0000 UTC m=+85.549545162 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516552 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516574 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516611 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516631 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516605338 +0000 UTC m=+85.549689586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516632 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516701 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.51668018 +0000 UTC m=+85.549764418 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516785 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516829 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516850 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516949 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516917137 +0000 UTC m=+85.550001375 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584258 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.660814 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:47:53.429659726 +0000 UTC Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687511 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695768 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.695863 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695710 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.696042 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.696187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790383 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.892979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995314 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.033967 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.037735 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:47:59 crc kubenswrapper[4820]: E0221 06:47:59.037935 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.053117 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.065428 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.077146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.086687 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.095744 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097192 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.112473 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.123109 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.137290 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.148252 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.160800 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.174659 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.190260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200412 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.204295 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.215184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.224933 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc 
kubenswrapper[4820]: I0221 06:47:59.237361 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.253936 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303675 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406912 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509747 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612491 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.661465 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:00:06.831484886 +0000 UTC Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.695894 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:59 crc kubenswrapper[4820]: E0221 06:47:59.696044 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714916 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817691 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125389 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329956 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433294 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535715 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638161 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.662291 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:33:58.855021547 +0000 UTC Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.695812 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.696023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.696138 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740849 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843107 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843148 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946221 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151299 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458946 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561481 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.662440 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:42:49.145370183 +0000 UTC Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664524 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.696426 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:01 crc kubenswrapper[4820]: E0221 06:48:01.696584 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766869 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870132 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.971964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177506 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281387 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487446 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590321 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.663560 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:32:20.986726129 +0000 UTC Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693095 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.696632 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696514 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.697006 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.697100 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.795997 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.898953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899034 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001569 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.104887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141938 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.184166 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194761 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.263455 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269178 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.291753 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.291986 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396579 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.499938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500071 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.664130 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:41:31.406071964 +0000 UTC Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.696155 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.696310 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808575 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911178 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014867 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222318 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325556 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325573 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428202 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.634941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635131 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.665356 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:12:21.318623435 +0000 UTC Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697141 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696838 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697210 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738861 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842945 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945874 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.253969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254081 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356179 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566191 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.666194 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:44:50.639754373 +0000 UTC Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669990 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.695726 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:05 crc kubenswrapper[4820]: E0221 06:48:05.696020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.711878 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.734834 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.748004 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.763823 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773480 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.781734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.796575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.807280 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.821474 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.842357 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.856905 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876658 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876681 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.877130 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.894121 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.915391 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.964408 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978478 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.986537 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.004802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:06Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc 
kubenswrapper[4820]: I0221 06:48:06.020884 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:06Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082568 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186728 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290791 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290851 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394538 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497889 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497947 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601371 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.667128 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:33:05.232535451 +0000 UTC Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696573 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696661 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.696713 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.696856 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.697142 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704900 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810298 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914225 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016553 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221660 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324133 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324177 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530871 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632657 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.668078 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:03:20.991104798 +0000 UTC Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.696525 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:07 crc kubenswrapper[4820]: E0221 06:48:07.696653 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735712 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839230 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942586 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045529 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147830 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250481 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454653 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557417 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660286 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.668833 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:37:33.114606098 +0000 UTC Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696178 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696273 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696331 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696717 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696752 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762379 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.865925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.865988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866046 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968701 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070558 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172734 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275466 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379398 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482191 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.668936 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:54:03.72796601 +0000 UTC Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689444 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.695711 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:09 crc kubenswrapper[4820]: E0221 06:48:09.695833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997338 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099582 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201714 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303355 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405718 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.609933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.609993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610043 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.669669 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:49:36.50189572 +0000 UTC Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696145 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696292 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696379 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696384 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696667 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696838 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917704 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225328 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327843 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430585 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534188 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.670284 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:27:25.308896907 +0000 UTC Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.696637 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:11 crc kubenswrapper[4820]: E0221 06:48:11.696779 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943375 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045498 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353276 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455634 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.671288 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:44:55.181374255 +0000 UTC Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696570 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696602 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696665 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.696733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.696824 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.697188 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.697806 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.698072 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864722 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967621 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070660 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274898 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376959 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473751 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.484623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484759 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484815 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:45.484797715 +0000 UTC m=+100.517881923 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484852 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-1
03b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488784 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.499251 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.513204 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.515982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.531443 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534677 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.546322 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.546432 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547608 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547664 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650517 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.672172 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:33:03.509682071 +0000 UTC Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.696727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.696846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752985 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854905 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957356 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059983 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265288 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367610 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.470008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.470019 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572639 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.672902 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:25:18.208493641 +0000 UTC Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674495 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.695820 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.695824 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.696034 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696119 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696347 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.706118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879620 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084570 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086385 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086413 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a" exitCode=1 Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086950 4820 scope.go:117] "RemoveContainer" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.097954 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.109975 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.120155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.133606 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.149138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.158376 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.170011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186957 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.190693 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.203639 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.213856 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc 
kubenswrapper[4820]: I0221 06:48:15.226266 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.235560 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.243763 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.253734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.264632 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.274270 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.287207 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288964 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.298525 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390894 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492793 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.673935 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:42:22.748935318 +0000 UTC Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.696519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:15 crc kubenswrapper[4820]: E0221 06:48:15.696635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.714326 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.726258 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.737361 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.748906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.759224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.773522 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.789761 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800528 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.802861 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa1
14db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.814495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.826797 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.837396 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.849315 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.865478 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.877292 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.888108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc 
kubenswrapper[4820]: I0221 06:48:15.897589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.901948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 
06:48:15.902206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902312 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.908620 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.918480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.004924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004986 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.091669 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.091954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.106902 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.107649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107659 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.120076 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.131819 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.144204 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.155662 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.165790 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.176961 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.190101 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.204024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209530 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.214319 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa1
14db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.227165 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.241566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.255316 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.269677 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.287098 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.300045 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.310833 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.311838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311922 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.320305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413904 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618232 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.674560 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:37:39.241883751 +0000 UTC Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696300 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696562 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696557 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696694 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696876 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722234 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927967 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030638 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132319 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132346 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235308 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.338009 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.439878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.439979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440042 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.542000 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644640 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.674840 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:40:03.052126474 +0000 UTC Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.696662 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:17 crc kubenswrapper[4820]: E0221 06:48:17.697011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747650 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850512 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952768 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952785 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055626 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467403 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569759 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671749 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.675816 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:56:18.424829213 +0000 UTC Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696180 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696335 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696377 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696380 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696467 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696502 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774122 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876769 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979100 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081793 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081848 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286430 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389102 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490955 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593890 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.676357 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:50:55.983373237 +0000 UTC Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.695914 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:19 crc kubenswrapper[4820]: E0221 06:48:19.696024 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798546 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004637 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106850 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.313065 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416457 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.518974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621526 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.676825 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:37:30.36137705 +0000 UTC Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696171 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696215 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696401 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696265 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696516 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696739 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825849 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030557 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.131974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.337018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.337040 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439383 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541896 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644972 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.677324 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:24:43.519180371 +0000 UTC Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.695601 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:21 crc kubenswrapper[4820]: E0221 06:48:21.695793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.746929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.746974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.848831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849334 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952170 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.054984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055063 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157947 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363477 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466207 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569928 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.678309 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:21:58.44912321 +0000 UTC Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696422 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696471 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.696507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.696688 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696814 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.697039 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878505 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981298 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083199 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.185894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.289016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391311 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493912 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596166 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643616 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.662177 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666989 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.678647 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:05:17.20282179 +0000 UTC Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.680913 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",
\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.685182 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.695985 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.696136 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.704167 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.707980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708101 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.720956 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.740347 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.740473 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.741971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742038 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844927 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947562 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050397 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153572 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256541 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359636 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.564866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565548 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.669027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.679457 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:17:38.511263478 +0000 UTC Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.695982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.696296 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.696294 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.696389 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.697171 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.697894 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772278 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874746 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079840 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.118035 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.120723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.121165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.145495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.163289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.178284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182285 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.197211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.220046 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.251826 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.264956 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.277598 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284415 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.295625 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.307427 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.325903 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.336372 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.349810 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.369417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.380189 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 
06:48:25.386827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386839 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.394945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.411729 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.422609 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.488983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488998 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591758 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.681421 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:23:38.869580198 +0000 UTC Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694566 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.695667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:25 crc kubenswrapper[4820]: E0221 06:48:25.695847 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.717731 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.737156 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.751490 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.761281 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.776059 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.789759 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796469 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.806925 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.825901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.843619 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.864088 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.882385 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.895570 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.926157 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.942221 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.953833 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.966565 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.980451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.995071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc 
kubenswrapper[4820]: I0221 06:48:26.002279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002290 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104261 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.124577 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.125381 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128019 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" exitCode=1 Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128095 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128729 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.128978 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.142491 4820 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.160494 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.176513 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.190637 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.203308 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207804 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.216273 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.230886 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.250478 4820 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.262891 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf
868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.274597 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.289028 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.306122 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309900 4820 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.325004 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.350072 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac23811
14c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.363551 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.376176 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc 
kubenswrapper[4820]: I0221 06:48:26.387653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.402687 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.412974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413053 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515676 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617528 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.682134 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:11:00.672797712 +0000 UTC Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696526 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696613 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696622 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696778 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696906 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720507 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822880 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925081 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027434 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.132351 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.135894 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:27 crc kubenswrapper[4820]: E0221 06:48:27.136115 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.148146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.163109 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.176186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.188006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.199519 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.209837 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.224045 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232440 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.236958 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.250589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.262875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.282521 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.296998 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.317050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.339333 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.352021 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.362204 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc 
kubenswrapper[4820]: I0221 06:48:27.373963 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.389558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.437925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.437987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540834 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.683012 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:50:25.716783025 +0000 UTC Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.696705 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:27 crc kubenswrapper[4820]: E0221 06:48:27.696901 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848884 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952692 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.054910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.054988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055052 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158412 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261364 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261391 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363990 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466951 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672520 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.683935 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:08:33.464087704 +0000 UTC Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696550 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696573 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696610 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696745 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696955 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980928 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084597 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289985 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392698 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499053 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603160 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.684853 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:04:06.370953305 +0000 UTC Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.696394 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:29 crc kubenswrapper[4820]: E0221 06:48:29.696567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705445 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808800 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911472 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014324 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219769 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322609 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322759 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.426003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.460543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.460824 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:34.460783992 +0000 UTC m=+149.493868230 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562564 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562643 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562759 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562783 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562804 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562820 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562832 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562843 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562796 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.562755124 +0000 UTC m=+149.595839372 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562900 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562926 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.562902309 +0000 UTC m=+149.595986547 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.563155 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.563116515 +0000 UTC m=+149.596200763 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.563192 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.563174796 +0000 UTC m=+149.596259034 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632866 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.686064 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:08:16.079601612 +0000 UTC Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696633 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696693 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696790 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696969 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735734 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838799 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941973 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045112 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148359 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250716 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353278 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456635 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559351 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.687546 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:53:48.010932744 +0000 UTC Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.695982 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:31 crc kubenswrapper[4820]: E0221 06:48:31.696175 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.765956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766092 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869719 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972444 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075797 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177817 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.281004 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487143 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591466 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.688730 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:17:42.36893287 +0000 UTC Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694174 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696674 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.696845 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696915 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.697067 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.697226 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797485 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003336 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105681 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208517 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311171 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414687 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.517013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.517025 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620618 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.689284 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:03:29.676898043 +0000 UTC Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.695732 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:33 crc kubenswrapper[4820]: E0221 06:48:33.695882 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723159 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.826865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827209 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931176 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.030367 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036602 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.058353 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.091122 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096851 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096875 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.111802 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115251 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.128566 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.128833 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130710 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233457 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440142 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543124 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645461 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.689844 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:12:23.009900437 +0000 UTC Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696401 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.696508 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696599 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.696793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.697136 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953301 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056505 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158884 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158941 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262631 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468089 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468152 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571450 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674857 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.690340 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:18:02.844426538 +0000 UTC Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.695726 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:35 crc kubenswrapper[4820]: E0221 06:48:35.695922 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.715293 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is 
after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.733575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.752867 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.767940 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777541 4820 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777578 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.784134 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 
21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.868149 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.868129227 podStartE2EDuration="1m6.868129227s" podCreationTimestamp="2026-02-21 06:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.867928601 +0000 UTC m=+90.901012809" watchObservedRunningTime="2026-02-21 06:48:35.868129227 +0000 UTC m=+90.901213435" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.868422 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podStartSLOduration=69.868415115 podStartE2EDuration="1m9.868415115s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.853032738 +0000 UTC m=+90.886116936" watchObservedRunningTime="2026-02-21 06:48:35.868415115 +0000 UTC m=+90.901499323" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880483 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.936520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tv4k8" podStartSLOduration=70.936499493 podStartE2EDuration="1m10.936499493s" podCreationTimestamp="2026-02-21 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.935728901 +0000 UTC m=+90.968813109" watchObservedRunningTime="2026-02-21 06:48:35.936499493 +0000 UTC m=+90.969583701" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.968785 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t5qxz" podStartSLOduration=69.96876818 podStartE2EDuration="1m9.96876818s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.951213301 +0000 UTC m=+90.984297509" watchObservedRunningTime="2026-02-21 06:48:35.96876818 +0000 UTC m=+91.001852388" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.968881 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.968876953 podStartE2EDuration="1m9.968876953s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.96842808 +0000 UTC 
m=+91.001512318" watchObservedRunningTime="2026-02-21 06:48:35.968876953 +0000 UTC m=+91.001961161" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983772 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.983787667 podStartE2EDuration="39.983787667s" podCreationTimestamp="2026-02-21 06:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.983325464 +0000 UTC m=+91.016409702" watchObservedRunningTime="2026-02-21 06:48:35.983787667 +0000 UTC m=+91.016871875" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.016031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" podStartSLOduration=70.016009573 podStartE2EDuration="1m10.016009573s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:36.01593106 +0000 UTC m=+91.049015278" watchObservedRunningTime="2026-02-21 06:48:36.016009573 +0000 UTC m=+91.049093781" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.060704 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-94gxr" podStartSLOduration=70.06068293 podStartE2EDuration="1m10.06068293s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:36.051210515 +0000 UTC m=+91.084294723" watchObservedRunningTime="2026-02-21 06:48:36.06068293 +0000 UTC m=+91.093767138" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085622 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085654 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189330 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291625 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394490 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.500953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501376 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605347 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.691420 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:40:47.237192615 +0000 UTC Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.695917 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696137 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.696292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.696447 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696520 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696926 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812218 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916683 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020213 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123659 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227739 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332363 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435877 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.692416 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:20:10.724603197 +0000 UTC Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.695942 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:37 crc kubenswrapper[4820]: E0221 06:48:37.696306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.744955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745119 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.847967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848082 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.951961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.054682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.054976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055452 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158313 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261429 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.364935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572296 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.676004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.676028 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.692645 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:02:27.656622339 +0000 UTC Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696054 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696063 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696344 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696483 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696628 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779499 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881530 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.984985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088282 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190760 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293786 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396725 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500203 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.693802 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:06:28.565504675 +0000 UTC Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.696294 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:39 crc kubenswrapper[4820]: E0221 06:48:39.696492 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.697433 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:39 crc kubenswrapper[4820]: E0221 06:48:39.697681 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.705014 4820 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807482 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910902 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014134 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117562 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219432 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424874 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.526938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630103 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.694921 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:09:37.126095699 +0000 UTC Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696133 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696156 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696394 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696530 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852532 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955910 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.059708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060105 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.165967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166994 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474276 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679320 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.696049 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:29:25.032962181 +0000 UTC Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.696098 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:41 crc kubenswrapper[4820]: E0221 06:48:41.696384 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.884967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.987970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090798 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296640 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604872 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695903 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695828 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.695989 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.696140 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.696265 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:34:18.725339204 +0000 UTC Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.696334 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.708012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.708030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810826 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913378 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.016708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017662 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.119870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120742 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.326995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.328180 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.430715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.430970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431151 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533790 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.696403 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:18:25.897209873 +0000 UTC Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.696481 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:43 crc kubenswrapper[4820]: E0221 06:48:43.696648 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738862 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841379 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.148978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149053 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172813 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.224297 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz"] Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.224822 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.227278 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.227909 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.228112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.230686 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.257669 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.25764439 podStartE2EDuration="30.25764439s" podCreationTimestamp="2026-02-21 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:44.256771326 +0000 UTC m=+99.289855554" watchObservedRunningTime="2026-02-21 06:48:44.25764439 +0000 UTC m=+99.290728628" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.325833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.325974 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326176 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326200 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.327331 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" podStartSLOduration=77.327318885 podStartE2EDuration="1m17.327318885s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:44.326430948 +0000 UTC m=+99.359515186" watchObservedRunningTime="2026-02-21 06:48:44.327318885 +0000 UTC m=+99.360403083" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427639 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.429121 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.441815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.445049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.548558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696687 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696733 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696707 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:44:09.974763396 +0000 UTC Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696819 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696703 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.696855 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.697003 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.697090 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.704157 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.196490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" event={"ID":"2852f5c7-0618-4070-a98c-3e5f6bc98db0","Type":"ContainerStarted","Data":"46a0a5b88337d0bc4f6d9e2d704d310e4730925f8729358746eb8ca7bc193bca"} Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.196824 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" event={"ID":"2852f5c7-0618-4070-a98c-3e5f6bc98db0","Type":"ContainerStarted","Data":"75e6cfb089709d4632a94173152d55fd4b9000f3e0a8900ac7a6200f851ca067"} Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.208998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" podStartSLOduration=79.208978996 podStartE2EDuration="1m19.208978996s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:45.208539003 +0000 UTC m=+100.241623201" watchObservedRunningTime="2026-02-21 06:48:45.208978996 +0000 UTC m=+100.242063194" Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.541695 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:45 
crc kubenswrapper[4820]: E0221 06:48:45.541822 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:45 crc kubenswrapper[4820]: E0221 06:48:45.541875 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:49.541857465 +0000 UTC m=+164.574941673 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.697613 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:45 crc kubenswrapper[4820]: E0221 06:48:45.697698 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.695911 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.696030 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695850 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.696230 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:47 crc kubenswrapper[4820]: I0221 06:48:47.696875 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:47 crc kubenswrapper[4820]: E0221 06:48:47.697365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:47 crc kubenswrapper[4820]: I0221 06:48:47.718842 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695913 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695998 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695998 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696233 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696432 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:49 crc kubenswrapper[4820]: I0221 06:48:49.695872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:49 crc kubenswrapper[4820]: E0221 06:48:49.696379 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696180 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696356 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696409 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696677 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696831 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696975 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:51 crc kubenswrapper[4820]: I0221 06:48:51.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:51 crc kubenswrapper[4820]: E0221 06:48:51.696368 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696312 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696473 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697087 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697445 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697664 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:53 crc kubenswrapper[4820]: I0221 06:48:53.696389 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:53 crc kubenswrapper[4820]: E0221 06:48:53.696830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695647 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.695798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695867 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.695995 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.696114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.697065 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.697328 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:55 crc kubenswrapper[4820]: I0221 06:48:55.696029 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:55 crc kubenswrapper[4820]: E0221 06:48:55.698570 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:55 crc kubenswrapper[4820]: I0221 06:48:55.741036 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.741000214 podStartE2EDuration="8.741000214s" podCreationTimestamp="2026-02-21 06:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:55.738999715 +0000 UTC m=+110.772083933" watchObservedRunningTime="2026-02-21 06:48:55.741000214 +0000 UTC m=+110.774084452" Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696697 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696842 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697004 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697126 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697274 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:57 crc kubenswrapper[4820]: I0221 06:48:57.696483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:57 crc kubenswrapper[4820]: E0221 06:48:57.696648 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696435 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696472 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696546 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696596 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696768 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:59 crc kubenswrapper[4820]: I0221 06:48:59.696326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:59 crc kubenswrapper[4820]: E0221 06:48:59.696850 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.696023 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.695999 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.696106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696178 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696299 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696791 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.300471 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301129 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log" Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301215 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf" exitCode=1 Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" 
event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"} Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301369 4820 scope.go:117] "RemoveContainer" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a" Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301897 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf" Feb 21 06:49:01 crc kubenswrapper[4820]: E0221 06:49:01.302140 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9" Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.696553 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:01 crc kubenswrapper[4820]: E0221 06:49:01.696699 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.305891 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695710 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695740 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.695908 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.696114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.696215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:03 crc kubenswrapper[4820]: I0221 06:49:03.696368 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:03 crc kubenswrapper[4820]: E0221 06:49:03.696673 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695886 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696025 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695896 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695896 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696115 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696392 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.669254 4820 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 21 06:49:05 crc kubenswrapper[4820]: I0221 06:49:05.696651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.697451 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.838614 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.695989 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.696109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.697083 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697201 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697407 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:07 crc kubenswrapper[4820]: I0221 06:49:07.696547 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:07 crc kubenswrapper[4820]: E0221 06:49:07.696731 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696315 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696323 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696315 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696498 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696642 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:09 crc kubenswrapper[4820]: I0221 06:49:09.695840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:09 crc kubenswrapper[4820]: E0221 06:49:09.696053 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:09 crc kubenswrapper[4820]: I0221 06:49:09.696796 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.336019 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.338451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.338819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.362874 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podStartSLOduration=103.362857057 podStartE2EDuration="1m43.362857057s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:10.362390773 +0000 UTC m=+125.395474981" 
watchObservedRunningTime="2026-02-21 06:49:10.362857057 +0000 UTC m=+125.395941255" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.631965 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"] Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.632140 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.632367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.696656 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.696707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.696778 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.696839 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.840216 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 06:49:11 crc kubenswrapper[4820]: I0221 06:49:11.696636 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:11 crc kubenswrapper[4820]: E0221 06:49:11.696766 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:11 crc kubenswrapper[4820]: I0221 06:49:11.696842 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:11 crc kubenswrapper[4820]: E0221 06:49:11.697063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:12 crc kubenswrapper[4820]: I0221 06:49:12.696559 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:12 crc kubenswrapper[4820]: I0221 06:49:12.696586 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:12 crc kubenswrapper[4820]: E0221 06:49:12.696673 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:12 crc kubenswrapper[4820]: E0221 06:49:12.696789 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696542 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:13 crc kubenswrapper[4820]: E0221 06:49:13.696682 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696808 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf" Feb 21 06:49:13 crc kubenswrapper[4820]: E0221 06:49:13.696808 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.351877 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.351925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2"} Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.623923 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.695997 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.696030 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:14 crc kubenswrapper[4820]: E0221 06:49:14.696462 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:14 crc kubenswrapper[4820]: E0221 06:49:14.696549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:15 crc kubenswrapper[4820]: I0221 06:49:15.695846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:15 crc kubenswrapper[4820]: I0221 06:49:15.695901 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:15 crc kubenswrapper[4820]: E0221 06:49:15.698037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:15 crc kubenswrapper[4820]: E0221 06:49:15.698259 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.696318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.696317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.699088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.699629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.696264 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.696378 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.699909 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700043 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700125 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.249575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.297429 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.298008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.298627 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.299316 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.299752 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.300392 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.300738 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301563 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301877 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.302013 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.302384 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.303435 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304107 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304350 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304650 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.307169 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308365 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308512 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308764 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308816 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308903 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.309185 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc 
kubenswrapper[4820]: I0221 06:49:25.310021 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.310083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.312730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.315130 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316898 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.317151 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.317728 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319323 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319571 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319673 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319800 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319979 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320034 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319983 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320289 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320330 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320292 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320177 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.321899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.322712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323690 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323710 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323966 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.324174 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.325483 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326725 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326954 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.339782 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.340671 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.341552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.342782 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343097 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343520 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343817 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.344030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.344402 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.344891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.345101 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.345579 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.352444 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357009 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357116 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357040 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357436 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357580 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.358489 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.358681 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.359876 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.360263 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.360829 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361098 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361630 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.362103 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.362813 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.363611 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.363672 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.364966 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.366146 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.366568 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374635 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374644 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374907 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.375762 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.376897 
4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378140 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378819 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378925 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378816 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379128 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379496 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379801 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380118 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380389 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380568 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.381052 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380586 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.385649 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.385850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.387010 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.387088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.390059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.391345 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.391853 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.392993 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393653 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393693 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: 
I0221 06:49:25.393733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393901 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 
06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod 
\"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394086 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.399342 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.400427 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.401978 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.402441 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.404135 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.404726 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405078 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405274 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405840 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406403 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.406428 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406528 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406412 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.407744 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422057 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422300 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.423390 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.423824 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.426535 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-q9pg5"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.428913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.433796 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.434463 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.441639 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.447303 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.447423 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450632 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.451056 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450943 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.451851 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.452327 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.459122 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.462371 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.462811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.463185 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.463683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.464458 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.464908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.485798 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486305 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486321 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.487503 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486408 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.487975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.491988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492442 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492602 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492456 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493257 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493542 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494168 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494491 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495141 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" 
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495181 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495190 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495279 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495322 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495409 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495425 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495439 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") 
pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495490 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495522 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwd4p\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495611 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod 
\"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495872 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496527 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.497285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.497471 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.498002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501191 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502569 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502693 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.503142 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.503520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.504347 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sps4j"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.506384 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.506476 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.507738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.508858 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.510506 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.511644 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.512966 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.513177 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.517945 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.518670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.518385 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.521336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.521625 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.522228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.527303 4820 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.527378 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.529725 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.530787 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.534655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.534730 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.536438 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.538713 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.539220 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.539754 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.541085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"] Feb 21 06:49:25 crc kubenswrapper[4820]: 
I0221 06:49:25.541929 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.543023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.544450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.545427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.546604 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.547828 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.549506 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.550854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.552750 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.553517 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.554286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.554476 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.555300 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.556531 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.557693 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.557864 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.559100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.559505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.561734 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.562838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.564180 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.565367 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.566521 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.567732 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sps4j"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.568743 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-z7jtv"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.569327 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.580230 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596878 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596952 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.597020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597751 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.598482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599043 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601217 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwd4p\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: 
\"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.603344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.604180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.619620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.639420 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.660332 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.679501 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.691619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.709777 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.711249 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.719626 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.740472 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.780103 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.799785 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.820012 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.839795 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.859720 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.879805 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.899656 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.919855 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.940057 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.959920 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.980060 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.000231 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.019839 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.040394 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.060193 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.080259 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.099860 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.119894 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.140972 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.159832 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.180097 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242757 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242841 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.259828 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.280929 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.299686 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.319389 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.340407 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.359835 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.381004 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.419462 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.440643 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.460562 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.480540 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.499232 4820 request.go:700] Waited for 1.006703325s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0 Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.500525 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.520043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.539780 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.560211 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.580099 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.599711 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.620260 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.640430 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.660615 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.680333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.700558 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.720268 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.740492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.760810 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.779880 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.800158 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 
06:49:26.834170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.841431 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.860435 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.880677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.887990 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.899845 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.920538 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.953712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.975470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.997332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.012810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod 
\"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.019921 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.039820 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.040108 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.060676 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.085522 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.100147 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.120175 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.134375 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.140033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.161298 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.180881 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.200642 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.204188 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.217326 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb92ec3e3_a4a6_4b99_9a3c_d1b97369ab52.slice/crio-9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb WatchSource:0}: Error finding container 9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb: Status 404 returned error can't find the container with id 9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.221899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.231138 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.240800 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.253293 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.260957 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.280320 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.302077 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.320836 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.324122 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.342548 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.354278 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda584a459_0672_47ef_bb32_c79f31790f91.slice/crio-9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f WatchSource:0}: Error finding container 9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f: Status 404 returned error 
can't find the container with id 9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.360202 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.381251 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.393468 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.400533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.416396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.417636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerStarted","Data":"9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.418708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerStarted","Data":"4d78f1e45a0c6a4cb8ba55254cd92ac8d35c6e02d5bd767c1be192646a5e40fd"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.419558 4820 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"690e697871c2ef33d8bb9bf3c685a48886157fe3db6ea51bd61104436932f421"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"551c77a07bc56850fd3be70039a389a75dd8f94222cc9946cc798296a3fb147a"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"b1908d8bf2cfb08c9868b23cd01d73eb9ff5b4ae3d82621bd62397583a9215c6"} Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.431306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"] Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.436042 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6775f10_01f3_4263_8441_ec5be6baf5c3.slice/crio-e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab WatchSource:0}: Error finding container e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab: Status 404 returned error can't find the container with id e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.439647 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.461041 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.499501 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.512546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.519218 4820 request.go:700] Waited for 1.919970283s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.535901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.553715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwd4p\" (UniqueName: 
\"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.575937 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: 
\"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655190 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655279 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655479 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655566 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655658 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656116 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656323 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656638 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656856 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.662121 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.162109391 +0000 UTC m=+143.195193589 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.664653 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.692469 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.698941 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.763848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764057 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764106 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764135 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzz4\" (UniqueName: \"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.764183 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.26415039 +0000 UTC m=+143.297234588 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.764998 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.264987176 +0000 UTC m=+143.298071434 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765060 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765215 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765731 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765807 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766163 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766578
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: 
\"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.768694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768767 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" 
(UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.771180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.771413 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " 
pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772675 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772772 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772798 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772934 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: \"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773138 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773280 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774014 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774215 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.774310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774464 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.775514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.775689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.776707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.776967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.777147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778025 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778600 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778741 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf98x\" (UniqueName: \"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod \"ingress-canary-m7pv7\" (UID: 
\"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778986 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod 
\"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779552 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.784758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.784941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785195 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785724 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.788814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.789147 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.793778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.800789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.817919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.854947 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.876095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880583 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880803 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880881 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880899 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880935 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: 
\"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881300 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881332 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881417 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf98x\" (UniqueName: 
\"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzz4\" (UniqueName: 
\"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881691 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 
21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881895 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.882007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.882022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.882272 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.382258317 +0000 UTC m=+143.415342515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.883047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.885890 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.886370 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.886412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.888199 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.891345 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.891380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.893460 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.894084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.895100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.896437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.896435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: 
\"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.897300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.899179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.899437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.900562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.901127 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.901360 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.904611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.906438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.907519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.908118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.908499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909495 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.913547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.915142 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.915872 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.916712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.917309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 
06:49:27.917393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.919152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.921064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.921728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.922223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.924022 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.927152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.931491 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.933146 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77141b4f_e31f_4e63_a5cb_329ea918a5ed.slice/crio-da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320 WatchSource:0}: Error finding container da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320: Status 404 returned error can't find the container with id da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320 Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.938457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.938648 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.945545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.957744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.966479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.971121 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.977533 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.983162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.983302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.983497 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.483483911 +0000 UTC m=+143.516568109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.986866 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86913b03_f631_4bfa_8533_c43326d364ff.slice/crio-986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5 WatchSource:0}: Error finding container 986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5: Status 404 returned error can't find the container with id 986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5 Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.997482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.025907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.056627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.076649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.083919 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.087747 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.587719286 +0000 UTC m=+143.620803484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.103610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.118333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.126683 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.137162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.160531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.172894 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.173317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.182002 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.183399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: \"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.188025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.188711 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.688691982 +0000 UTC m=+143.721776180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.190542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.190736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.203509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.217957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.232509 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.236298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.245608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.255171 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.256970 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.259135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.276636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.292669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.294211 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.794194956 +0000 UTC m=+143.827279154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.294284 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.296030 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.299955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.318588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.333916 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:28 crc kubenswrapper[4820]: W0221 06:49:28.335436 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee60016_61c2_4f4d_b181_59c1def12eef.slice/crio-08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6 WatchSource:0}: Error finding container 08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6: Status 404 returned error can't find the container with id 08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6 Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.338428 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.342660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.349749 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.358370 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.359498 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.365966 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.376358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.395791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.397025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.398269 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.898250835 +0000 UTC m=+143.931335033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.414807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzz4\" (UniqueName: \"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.416409 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.420529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.432279 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.440007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.440505 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.447127 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.454831 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.461703 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.462106 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerStarted","Data":"163e0224df79387e94d53de67771865cc2f448fe55307754f0c2f2e2575f77bd"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.463430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.467323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerStarted","Data":"0767d187d2981c7d5f1c668b318887301f7e5326b2d0aaa6f0c17cc8530104d7"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.469270 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.474273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"d389d881740115e89bc79d39f4df733cb9bf875d1fdcda64285a0a8ad8bf1d6d"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.474316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.475702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.482989 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.483587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.491909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerStarted","Data":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.492881 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.506696 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.507614 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.507954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.508252 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.008223015 +0000 UTC m=+144.041307213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.508913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"668acdd49cb90779a5a4c90c70308189cbe53ef18f3d1f8f218e2da60e56e210"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512805 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6775f10-01f3-4263-8441-ec5be6baf5c3" containerID="f95713f2b6136ef696eafaeccf90f622806005b5330fb832ba828e64c86fa12b" exitCode=0 Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerDied","Data":"f95713f2b6136ef696eafaeccf90f622806005b5330fb832ba828e64c86fa12b"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerStarted","Data":"e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.513475 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf98x\" (UniqueName: \"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod 
\"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.529394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" event={"ID":"6e921dcf-57ab-41e2-9994-fb602aeec37f","Type":"ContainerStarted","Data":"3ab60d9b3f52c70470320562b18fdde34067515c8b35d8dc6c25b6e59a035ed5"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.529452 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" event={"ID":"6e921dcf-57ab-41e2-9994-fb602aeec37f","Type":"ContainerStarted","Data":"03ba3f049be9777508c6bf133e2f4c8552c6d09792d0971a629871cc98dc8ffd"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.532393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.539196 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.546714 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerStarted","Data":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.547877 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.548914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.556197 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dhsbz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.556293 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.576770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" event={"ID":"35f83dc0-1687-4716-b61f-e7bbb921d1c2","Type":"ContainerStarted","Data":"296c1c427e3b90697c7d0dcd2e934a82975b980d70a3cab9c3a7d3ad43fcbfef"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 
06:49:28.613123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.614703 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.114692638 +0000 UTC m=+144.147776836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.626343 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.629713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"009711a7878119d1466526558a1345d8fbde1d13b4a5b3fc08ae790a869b47df"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.630182 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"0d7642ab5cfabcea5ad30e4983844870b8a7571f1f485792a8ad4f08f2d8a036"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.669363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.673812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" event={"ID":"77141b4f-e31f-4e63-a5cb-329ea918a5ed","Type":"ContainerStarted","Data":"6ea00a240ec6b4bd80c7ed6defb6fc48d83a0c5bf192140ba4f3f57d1c95b56e"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.673857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" event={"ID":"77141b4f-e31f-4e63-a5cb-329ea918a5ed","Type":"ContainerStarted","Data":"da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.691362 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.691787 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.719758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.719843 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.219829181 +0000 UTC m=+144.252913379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.720106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.722838 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.222830672 +0000 UTC m=+144.255914870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.763001 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" event={"ID":"aee60016-61c2-4f4d-b181-59c1def12eef","Type":"ContainerStarted","Data":"08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6"} Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.777306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.787503 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"] Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.812884 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.821206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.822248 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.322197399 +0000 UTC m=+144.355281597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.829395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.830050 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.330037857 +0000 UTC m=+144.363122055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.930695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.930893 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.430857898 +0000 UTC m=+144.463942106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.931816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.932410 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.432396185 +0000 UTC m=+144.465480383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.476461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.477787 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.477730907 +0000 UTC m=+145.510815135 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: W0221 06:49:29.513468 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7322fd9_681a_4d9a_83ac_9e74308f8747.slice/crio-250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024 WatchSource:0}: Error finding container 250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024: Status 404 returned error can't find the container with id 250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024 Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.524869 4820 csr.go:261] certificate signing request csr-v586p is approved, waiting to be issued Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.535729 4820 csr.go:257] certificate signing request csr-v586p is issued Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.552579 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.573636 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.578131 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.579102 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.079080824 +0000 UTC m=+145.112165022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.592628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"]
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.630256 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"]
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.636468 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"]
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.684064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.684389 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.184379031 +0000 UTC m=+145.217463229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.736288 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" podStartSLOduration=124.736266322 podStartE2EDuration="2m4.736266322s" podCreationTimestamp="2026-02-21 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.685984051 +0000 UTC m=+144.719068249" watchObservedRunningTime="2026-02-21 06:49:29.736266322 +0000 UTC m=+144.769350520"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.785055 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" podStartSLOduration=122.785037838 podStartE2EDuration="2m2.785037838s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.784481291 +0000 UTC m=+144.817565509" watchObservedRunningTime="2026-02-21 06:49:29.785037838 +0000 UTC m=+144.818122036"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.785875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.786193 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.286178032 +0000 UTC m=+145.319262230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.863573 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f6j4c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.863624 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.877292 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" podStartSLOduration=122.877278037 podStartE2EDuration="2m2.877278037s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.875889865 +0000 UTC m=+144.908974063" watchObservedRunningTime="2026-02-21 06:49:29.877278037 +0000 UTC m=+144.910362235"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.886680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.886959 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.386948372 +0000 UTC m=+145.420032570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.887947 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"119e4cae90399f87711a5e28a083ae9bfdf8f68b0235530588df32471875cf14"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888028 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z7jtv" event={"ID":"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86","Type":"ContainerStarted","Data":"c086d7926efb60ba245172a9705ea0699ff50fe3572e5d1260f212299ed45b3d"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerStarted","Data":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerStarted","Data":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888064 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" event={"ID":"0951903b-474b-4279-b6ad-ab8920fd2d5b","Type":"ContainerStarted","Data":"7fbbba8bc2f9c68f660d632bd811dc3f8c5587f1030f382a4f8d537dd54563e1"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q9pg5" event={"ID":"23e5da7c-be56-4259-ab49-bf8ad50831fe","Type":"ContainerStarted","Data":"a7c0a8555f6c5bdcdd16168a7270153195c381e910a38dfadc8c6d8accd39bcc"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.891381 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" podStartSLOduration=123.891371007 podStartE2EDuration="2m3.891371007s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.890577983 +0000 UTC m=+144.923662181" watchObservedRunningTime="2026-02-21 06:49:29.891371007 +0000 UTC m=+144.924455195"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.891908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"6f75c114844d6958dd08d020548972dc253bb2aab663deb3b5b62ecce93bada1"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.895795 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.912443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" event={"ID":"aee60016-61c2-4f4d-b181-59c1def12eef","Type":"ContainerStarted","Data":"0e9394f904f8e69bd02c242770778e31a81212264bb19a6c916a15a7f90dfd7c"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.918529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" event={"ID":"35f83dc0-1687-4716-b61f-e7bbb921d1c2","Type":"ContainerStarted","Data":"ca66ca39715695f08524676ef281309b734ae222f7bc78c1b27e486b09f92969"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.920701 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.926013 4820 patch_prober.go:28] interesting pod/console-operator-58897d9998-4kcq6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.926060 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" podUID="35f83dc0-1687-4716-b61f-e7bbb921d1c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.931750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"0862da7cd9add89f3ef0abf2fb7a4fcb4b661cab58657af342c14fa20ceeecf2"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.938295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" event={"ID":"4cefa9c1-919e-4edc-95c9-d26c4f8f254f","Type":"ContainerStarted","Data":"ba44082e4cf50771d02085ed594cf66179a3893adfb4bcb572cc123a20fd2072"}
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.953444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.004785 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.007862 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.507845124 +0000 UTC m=+145.540929322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.041027 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podStartSLOduration=124.040998754 podStartE2EDuration="2m4.040998754s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.027617457 +0000 UTC m=+145.060701655" watchObservedRunningTime="2026-02-21 06:49:30.040998754 +0000 UTC m=+145.074082972"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.103645 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" podStartSLOduration=123.103626463 podStartE2EDuration="2m3.103626463s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.068589185 +0000 UTC m=+145.101673393" watchObservedRunningTime="2026-02-21 06:49:30.103626463 +0000 UTC m=+145.136710671"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.106961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.111195 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.611178652 +0000 UTC m=+145.644262840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.210507 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.211896 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.711866969 +0000 UTC m=+145.744951167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.214746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.222509 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.722491933 +0000 UTC m=+145.755576131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.229219 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" podStartSLOduration=124.229189718 podStartE2EDuration="2m4.229189718s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.223026129 +0000 UTC m=+145.256110327" watchObservedRunningTime="2026-02-21 06:49:30.229189718 +0000 UTC m=+145.262273916"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.315656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.316047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.816033463 +0000 UTC m=+145.849117661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.353792 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podStartSLOduration=124.353776352 podStartE2EDuration="2m4.353776352s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.35306438 +0000 UTC m=+145.386148578" watchObservedRunningTime="2026-02-21 06:49:30.353776352 +0000 UTC m=+145.386860550"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.386660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"]
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.417953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.418293 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.918281337 +0000 UTC m=+145.951365535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.519132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.520225 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.020209191 +0000 UTC m=+146.053293389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.537966 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-21 06:44:29 +0000 UTC, rotation deadline is 2026-11-04 07:38:35.25467705 +0000 UTC
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.538011 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6144h49m4.716669156s for next certificate rotation
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.570077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" podStartSLOduration=124.57006273 podStartE2EDuration="2m4.57006273s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.511334021 +0000 UTC m=+145.544418219" watchObservedRunningTime="2026-02-21 06:49:30.57006273 +0000 UTC m=+145.603146918"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.614581 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" podStartSLOduration=123.614562346 podStartE2EDuration="2m3.614562346s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.613564785 +0000 UTC m=+145.646648983" watchObservedRunningTime="2026-02-21 06:49:30.614562346 +0000 UTC m=+145.647646544"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.621000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.621421 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.121404525 +0000 UTC m=+146.154488723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.701063 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cgbzf" podStartSLOduration=124.701046761 podStartE2EDuration="2m4.701046761s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.699150182 +0000 UTC m=+145.732234390" watchObservedRunningTime="2026-02-21 06:49:30.701046761 +0000 UTC m=+145.734130959"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.724878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.725135 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.225095893 +0000 UTC m=+146.258180091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.725247 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.725728 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.225719882 +0000 UTC m=+146.258804080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.827868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.828391 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.328372508 +0000 UTC m=+146.361456706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.931876 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.932380 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.432364107 +0000 UTC m=+146.465448305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.948124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" event={"ID":"9185d26f-44b3-45e3-9417-11148a03a52d","Type":"ContainerStarted","Data":"e1f0606b9400e1be111b8fa11abcf23669076791cf0deba2791f7aab27698ab8"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.953384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"b0b84810729f08f6f4a0a76d41d52e508b32f4e7c40166318bec1eec3e01a8be"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.955420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" event={"ID":"4cefa9c1-919e-4edc-95c9-d26c4f8f254f","Type":"ContainerStarted","Data":"613a9c144c73d7dd6ec5f12bb0d0662ce55ba82b427a1fa572bda8e7a2dfdd4d"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.958847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q9pg5" event={"ID":"23e5da7c-be56-4259-ab49-bf8ad50831fe","Type":"ContainerStarted","Data":"95ce60616c680a0a70b12deed4684771b8f6db766e3d757a3a62b08aab238c19"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.972392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"2894c8e47b473d8cee0f0aefd641f1f2683c3376a2bf810ffbcfc723b4ff70f8"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.972432 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"b50b0f22da172cb023b14eaa761cac26db58be900e058aa95640679057e584d7"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.977675 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerStarted","Data":"93680276b6385e733428fe927807c8cf85ff66c9bf91666f33c34a34f9ca4ebd"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.978767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"fa51decaa09b0bb3accdc2e5dcd9fc2db2b58b72a82bc27d54517e9a1e590d87"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.979299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" event={"ID":"22582d21-813c-49a4-aa49-e4a7d3f0f638","Type":"ContainerStarted","Data":"f22cd2b646f76376735fec05b746210afcf303b742216a5ba7e6c9bbca4e2cb9"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.979987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.980011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"810a68ff70084041c6eb28f80a1cb0b74ac91eb42a212ecb2271f8c2b08b95cf"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.981025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb"}
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.001370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"1aa2781f8a7ff2588e5f11dec6fdbb1dd65b5fc040d316949a44b5804a171cf6"}
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.022608 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" podStartSLOduration=125.022593525 podStartE2EDuration="2m5.022593525s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.997588533 +0000 UTC m=+146.030672721" watchObservedRunningTime="2026-02-21 06:49:31.022593525 +0000 UTC m=+146.055677723"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.040174 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 
crc kubenswrapper[4820]: E0221 06:49:31.040756 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.540733067 +0000 UTC m=+146.573817255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.041562 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" podStartSLOduration=124.041541372 podStartE2EDuration="2m4.041541372s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.035911051 +0000 UTC m=+146.068995249" watchObservedRunningTime="2026-02-21 06:49:31.041541372 +0000 UTC m=+146.074625570" Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.064555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-q9pg5" podStartSLOduration=124.064538622 podStartE2EDuration="2m4.064538622s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.063464589 +0000 UTC m=+146.096548787" watchObservedRunningTime="2026-02-21 06:49:31.064538622 +0000 UTC m=+146.097622820" 
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.141292 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.148854 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.64883601 +0000 UTC m=+146.681920278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.155781 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" podStartSLOduration=125.155764181 podStartE2EDuration="2m5.155764181s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.148230382 +0000 UTC m=+146.181314580" watchObservedRunningTime="2026-02-21 06:49:31.155764181 +0000 UTC m=+146.188848379" Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.204922 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.242359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.246982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.247413 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.747388743 +0000 UTC m=+146.780472941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.272948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.280458 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.288310 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.343784 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.346483 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8462a28b_a255_4ec7_9e85_cb98c6666e68.slice/crio-e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2 WatchSource:0}: Error finding container e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2: Status 404 returned error can't find the container with id e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2 Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.346663 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.346716 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.348732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.349216 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.849201993 +0000 UTC m=+146.882286191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.357345 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.361056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.366929 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"] Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.372823 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fe97d7_1fa0_42aa_b8df_66f25aa6ee60.slice/crio-c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16 WatchSource:0}: Error finding container c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16: Status 404 returned error can't find the container with id c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16 Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.449421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:31 crc kubenswrapper[4820]: 
E0221 06:49:31.449812 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.949794097 +0000 UTC m=+146.982878295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.524450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.550472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.550737 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.050726482 +0000 UTC m=+147.083810680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.563592 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.570287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.580547 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.595126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.609318 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sps4j"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.630876 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.630967 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.633508 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"] Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.644078 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b64a6e2_e14a_4de0_8630_e617a55b0794.slice/crio-69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651 WatchSource:0}: Error finding container 69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651: Status 404 returned error can't find the container with id 69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651 Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.651655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.652416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.152399829 +0000 UTC m=+147.185484027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.660317 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"] Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.754162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.754446 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.254434227 +0000 UTC m=+147.287518425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.857889 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.858603 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.35858322 +0000 UTC m=+147.391667418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.960536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.960868 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.460858075 +0000 UTC m=+147.493942273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.061918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.062059 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.562040478 +0000 UTC m=+147.595124676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.062382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.062828 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.562811121 +0000 UTC m=+147.595895309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.070175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" event={"ID":"a3d20e62-3892-4e70-adad-754ac75dd1b9","Type":"ContainerStarted","Data":"75f5ecd9e136d582bb3fdae89f6b9224aaaa0e10f700bcc50c9d929b5f3898a2"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.097725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" event={"ID":"0951903b-474b-4279-b6ad-ab8920fd2d5b","Type":"ContainerStarted","Data":"f268503a7e18f87985241acaf291dd3a92711e804b2f87db40310ef274db4e24"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.100738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"6aae7826ff160b1a23309dd94c12b3ff63c2f165074c23ce601001b0a1597c16"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.156485 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" podStartSLOduration=125.156460363 podStartE2EDuration="2m5.156460363s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.14058258 +0000 UTC 
m=+147.173666778" watchObservedRunningTime="2026-02-21 06:49:32.156460363 +0000 UTC m=+147.189544561" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.157281 4820 generic.go:334] "Generic (PLEG): container finished" podID="c8bb35a2-6708-4267-bb44-d80ff0e0ccc8" containerID="664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e" exitCode=0 Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.157367 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerDied","Data":"664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.175019 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"781da944a068c97f47f327091b67409d1ecf3bfc685ab4af8b14a53542fa00f3"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.175629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.176142 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.676126273 +0000 UTC m=+147.709210461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.181815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" event={"ID":"128520ce-9a27-454a-8394-efae24e83a7c","Type":"ContainerStarted","Data":"1657bf67213cd1bbace5eec2cec8bec5a036cb00565c30662fb164524d3fdb2f"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.183697 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"fd00299839ff8b583b8f68c4fb8b34c1390dfeb7629ea3f2c2524369acef22d6"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.195232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" event={"ID":"3b64a6e2-e14a-4de0-8630-e617a55b0794","Type":"ContainerStarted","Data":"69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.205999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z7jtv" event={"ID":"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86","Type":"ContainerStarted","Data":"0d894d47147c87510f7b2f8db8380a0162734e3ab1023e7780225c516e083f30"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.220934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" event={"ID":"021bee51-757d-4fcb-97b6-af9ad74d569c","Type":"ContainerStarted","Data":"8df7b5a30dc2669ccc093edfe4d82344d5f472bcf7d22b0fa4e189008ed5304a"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" event={"ID":"34dd983a-2ee5-48ad-8858-59e9c0cbf483","Type":"ContainerStarted","Data":"b5cae302d3120e5ca6fb5eda19eb1d11062e10a1a00c6c23268c91d0c97d973b"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222676 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" event={"ID":"34dd983a-2ee5-48ad-8858-59e9c0cbf483","Type":"ContainerStarted","Data":"655e3135e8933c633ffbdf96fca90788386c6886ccd3b16d3f1333bb879ee01b"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.224589 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v8n56 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.224642 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" podUID="34dd983a-2ee5-48ad-8858-59e9c0cbf483" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.232721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerStarted","Data":"88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.242588 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-z7jtv" podStartSLOduration=7.242562046 podStartE2EDuration="7.242562046s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.231933813 +0000 UTC m=+147.265018021" watchObservedRunningTime="2026-02-21 06:49:32.242562046 +0000 UTC m=+147.275646244" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.253715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"033318c82ad418b91555861000478540445e6b68b26e38e0459b1fb47d5aed35"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.256456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.256753 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.261944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" event={"ID":"22582d21-813c-49a4-aa49-e4a7d3f0f638","Type":"ContainerStarted","Data":"b1589a3cd9e2b85240d12ce4483f39842ce6af690e6d0c59882ed375e457b7da"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.279604 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.279742 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxrb8" event={"ID":"8b5270e1-81d3-477a-96f9-b2cbc3090288","Type":"ContainerStarted","Data":"d160963901a3108a72ebc4df3a2f84a300f9b921caf41105a23e76fb8cabeef1"} Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.281258 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.781223334 +0000 UTC m=+147.814307612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.297544 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" podStartSLOduration=125.297525121 podStartE2EDuration="2m5.297525121s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.262477793 +0000 UTC m=+147.295561991" watchObservedRunningTime="2026-02-21 06:49:32.297525121 +0000 UTC m=+147.330609319" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.298744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"5e8218923c3874ee7ca02e1574d8cafa2d64e6dd64d22a77d151a9a74e726ace"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.315647 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" podStartSLOduration=126.315614442 podStartE2EDuration="2m6.315614442s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.314314902 +0000 UTC m=+147.347399100" watchObservedRunningTime="2026-02-21 06:49:32.315614442 +0000 UTC m=+147.348698640" Feb 21 
06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.322928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" event={"ID":"fc95478e-4574-4010-8833-5da4ec1987b3","Type":"ContainerStarted","Data":"075d695218f3a3b627399b0b9bffc8957a71e4ecfdad6a8d24c8f6616e56ddf5"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.328906 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" podStartSLOduration=125.328888686 podStartE2EDuration="2m5.328888686s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.298481129 +0000 UTC m=+147.331565327" watchObservedRunningTime="2026-02-21 06:49:32.328888686 +0000 UTC m=+147.361972884" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.336300 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" podStartSLOduration=126.336285211 podStartE2EDuration="2m6.336285211s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.335359583 +0000 UTC m=+147.368443781" watchObservedRunningTime="2026-02-21 06:49:32.336285211 +0000 UTC m=+147.369369409" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.350970 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:32 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:32 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:32 crc 
kubenswrapper[4820]: healthz check failed Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.351025 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.363876 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" podStartSLOduration=125.363860351 podStartE2EDuration="2m5.363860351s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.363436008 +0000 UTC m=+147.396520206" watchObservedRunningTime="2026-02-21 06:49:32.363860351 +0000 UTC m=+147.396944549" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369870 4820 generic.go:334] "Generic (PLEG): container finished" podID="228a9802-8837-425d-ab0f-72c79dbc4399" containerID="d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb" exitCode=0 Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerDied","Data":"d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"a8d1f6410a441aab0514e5273b878fa229c4f99c14eebdcd025495462bbf0297"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.381743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.382182 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.882165708 +0000 UTC m=+147.915249906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.399966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"264d4c8de9572d2fcdac3d5e161f64b07d03df73777e0a8fa2405f68a6fd7160"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.420334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" event={"ID":"9185d26f-44b3-45e3-9417-11148a03a52d","Type":"ContainerStarted","Data":"38b52a348b47efeaf53e22da1432f82187d64641cb26195e01c692ec44875652"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.427749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" event={"ID":"c7e86cdb-22d7-424c-a51e-61c1d7848655","Type":"ContainerStarted","Data":"67d754f5f9200f4e1ce2f8d57c344ef661c4338710d6947334863da9e37e6488"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.437470 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" podStartSLOduration=125.437453783 podStartE2EDuration="2m5.437453783s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.431959556 +0000 UTC m=+147.465043744" watchObservedRunningTime="2026-02-21 06:49:32.437453783 +0000 UTC m=+147.470537971" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.458670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerStarted","Data":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.458748 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerStarted","Data":"c67db1d6ea1ea9f42d159552b399ae3814a8a2a153770e3fc34b2a49bbb171e0"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.460147 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.479749 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" podStartSLOduration=125.47972268 podStartE2EDuration="2m5.47972268s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.479120181 +0000 UTC m=+147.512204379" watchObservedRunningTime="2026-02-21 06:49:32.47972268 +0000 UTC m=+147.512806868" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.483677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.485210 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.985193177 +0000 UTC m=+148.018277375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.487845 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.487884 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.511904 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podStartSLOduration=125.51188886 podStartE2EDuration="2m5.51188886s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.511264091 +0000 UTC m=+147.544348289" watchObservedRunningTime="2026-02-21 06:49:32.51188886 +0000 UTC m=+147.544973058" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.516715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" 
event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"d21ba6d705d92695af74bc91029e641818054af691d6487b81a95e8fd151b912"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.516759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.521046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" event={"ID":"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60","Type":"ContainerStarted","Data":"4c29039920c95eee332eecb7910c50771bfe18476a7e9f463874a6ebfb1dcaf5"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.521078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" event={"ID":"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60","Type":"ContainerStarted","Data":"c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.522024 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.526619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7pv7" event={"ID":"1df285f9-7ae4-4fea-8817-0a7e5e851551","Type":"ContainerStarted","Data":"89b84e87bf852c692ff25a9e253f29c013798006dcfbb46b2aef64b5c1b037ba"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.545742 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" podStartSLOduration=125.545715731 podStartE2EDuration="2m5.545715731s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.539053107 +0000 UTC m=+147.572137305" watchObservedRunningTime="2026-02-21 06:49:32.545715731 +0000 UTC m=+147.578799929" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.555091 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4dnsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.555144 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" podUID="b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.561997 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"8eb9a6dd730c2073d51dc2b87dd09bd59115a5dc4e29c78bbecb77e78fc45209"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.584837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.591790 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.091747043 +0000 UTC m=+148.124831241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.596093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.620925 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.120900501 +0000 UTC m=+148.153984689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.630113 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.683665 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m7pv7" podStartSLOduration=7.683636871 podStartE2EDuration="7.683636871s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.583647656 +0000 UTC m=+147.616731854" watchObservedRunningTime="2026-02-21 06:49:32.683636871 +0000 UTC m=+147.716721069" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.699891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.701827 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:33.201805815 +0000 UTC m=+148.234890013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.702023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.702432 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.202424834 +0000 UTC m=+148.235509032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.803696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.804363 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.304347799 +0000 UTC m=+148.337431997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.804634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.804940 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.304932936 +0000 UTC m=+148.338017134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.905665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.906050 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.406030786 +0000 UTC m=+148.439114984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.007864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.008246 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.508217439 +0000 UTC m=+148.541301637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.109036 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.109157 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.609139543 +0000 UTC m=+148.642223731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.109308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.109583 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.609574537 +0000 UTC m=+148.642658735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.210703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.210901 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.710874172 +0000 UTC m=+148.743958380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.211002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.211335 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.711326595 +0000 UTC m=+148.744410783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.311604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.311799 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.811770665 +0000 UTC m=+148.844854863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.311906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.312219 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.812205888 +0000 UTC m=+148.845290076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.349891 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:33 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:33 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:33 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.349986 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.413581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.413714 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:33.91369502 +0000 UTC m=+148.946779208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.413850 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.414144 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.914136263 +0000 UTC m=+148.947220461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.514629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.514819 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.014791579 +0000 UTC m=+149.047875777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.514910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.515250 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.015223362 +0000 UTC m=+149.048307560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.568553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7pv7" event={"ID":"1df285f9-7ae4-4fea-8817-0a7e5e851551","Type":"ContainerStarted","Data":"7356d0b1eaa35efd0f5db64ba47b152274dc4d1a1d829988c16aed6c5d2acefe"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.571082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" event={"ID":"021bee51-757d-4fcb-97b6-af9ad74d569c","Type":"ContainerStarted","Data":"4f7b07b2dcc928735d14be50607836c14b74ebde9973969595e23e8fe1bb36dd"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.571391 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.573494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"54ef5cc00634eaf23d9c876294b375f9a2440e00825a2d82195323bd652bd25e"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.573541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" 
event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"b53d2ab90968edb86cb1293b5f0a321255f5ed82ea60cf7f33b1cecf7326d6f3"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.574874 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerStarted","Data":"d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.577760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxrb8" event={"ID":"8b5270e1-81d3-477a-96f9-b2cbc3090288","Type":"ContainerStarted","Data":"85617112a4d4ea68145daddd309b7480a5180d7eef69357164533db8c10391ac"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.578148 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.579171 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.579208 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.581284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" 
event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"c7d2264cdc6ef73ccff94b871ba65c20844e7dfb8d336423ca7ed6fc1b537385"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"59514df78c99316b54ef1d4074c1cb4ff3529f4d26e0463762188af68dffa419"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"3b5c98b5d5c70244ec42ac5e37567b08212d2dd9fd4247a4b77b9a3d541fa966"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585665 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.588022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"cc725151a862421d96ae05acaff9476fb868bff690d23458ce49f902077f31d6"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.590625 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"9b16f911a6a957a967446cfbf5c6ea2fc96992bba81ede127d18b8f442dc2429"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.590763 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.591818 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.593191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" event={"ID":"a3d20e62-3892-4e70-adad-754ac75dd1b9","Type":"ContainerStarted","Data":"4cdd61ac12c712c0524d25b2b8de59b3c6079f9b05a9b82135d32533f85e73f5"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.599506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"ab46a15714b5ce15e364bd8cc75cf44a9ee5d89f66a5b5ebfe6d8bc20e394dd8"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.599559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"89006b00b6440cabf3b88c2b6c0225f227dc1ac672abee5f596dbd980663a035"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.600773 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" podStartSLOduration=126.600758289 podStartE2EDuration="2m6.600758289s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.598887731 +0000 UTC m=+148.631971929" watchObservedRunningTime="2026-02-21 06:49:33.600758289 +0000 UTC m=+148.633842487" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.601136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" 
event={"ID":"3b64a6e2-e14a-4de0-8630-e617a55b0794","Type":"ContainerStarted","Data":"af6314d7ea27c7b3a4e8144c625a0c4a8949018e65622aa6a209e0d3a8700c6e"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.608785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" event={"ID":"fc95478e-4574-4010-8833-5da4ec1987b3","Type":"ContainerStarted","Data":"6612b7ac98012272910d7d5a6a73fe2190842067b4aebead992ae2def82e662f"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.612374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"7014e91500c417a828d3c717957a93a13e1d30d76a8161f8ed7815aaa79f7cdd"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.612648 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.615467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.615571 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.115535098 +0000 UTC m=+149.148619296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.615680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" event={"ID":"c7e86cdb-22d7-424c-a51e-61c1d7848655","Type":"ContainerStarted","Data":"e3b3788b1cfc79210c52d6900090330939103716741df826d89e6a6fc1c7526c"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.616888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.618018 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.117995653 +0000 UTC m=+149.151079851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.624855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"92b83e5234915f54f8b3c1fcbb8589558df18926ca198a27371d834a143de4a6"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.627405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" event={"ID":"128520ce-9a27-454a-8394-efae24e83a7c","Type":"ContainerStarted","Data":"687382766b16f44a337c2f886e962776bbfbd10e7d5d0d3d4133f3ae54b0e6f3"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.632886 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.632942 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.644418 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.644488 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.665722 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" podStartSLOduration=126.665698946 podStartE2EDuration="2m6.665698946s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.663392486 +0000 UTC m=+148.696476704" watchObservedRunningTime="2026-02-21 06:49:33.665698946 +0000 UTC m=+148.698783144" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.666246 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" podStartSLOduration=127.666226872 podStartE2EDuration="2m7.666226872s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.621860221 +0000 UTC m=+148.654944419" watchObservedRunningTime="2026-02-21 06:49:33.666226872 +0000 UTC m=+148.699311070" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.715331 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" podStartSLOduration=127.715308207 podStartE2EDuration="2m7.715308207s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.712576044 
+0000 UTC m=+148.745660242" watchObservedRunningTime="2026-02-21 06:49:33.715308207 +0000 UTC m=+148.748392405" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.717894 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.718999 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.218973919 +0000 UTC m=+149.252058147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.741184 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" podStartSLOduration=126.741163485 podStartE2EDuration="2m6.741163485s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.73607597 +0000 UTC m=+148.769160178" watchObservedRunningTime="2026-02-21 06:49:33.741163485 +0000 UTC m=+148.774247683" Feb 21 06:49:33 crc 
kubenswrapper[4820]: I0221 06:49:33.822268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.822858 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.322846393 +0000 UTC m=+149.355930591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.852850 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" podStartSLOduration=126.852827627 podStartE2EDuration="2m6.852827627s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.840375538 +0000 UTC m=+148.873459746" watchObservedRunningTime="2026-02-21 06:49:33.852827627 +0000 UTC m=+148.885911825" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.877636 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/downloads-7954f5f757-kxrb8" podStartSLOduration=127.877619411 podStartE2EDuration="2m7.877619411s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.875802716 +0000 UTC m=+148.908886904" watchObservedRunningTime="2026-02-21 06:49:33.877619411 +0000 UTC m=+148.910703619" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.923925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.924051 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.424034305 +0000 UTC m=+149.457118503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.924285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.930054 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.430034469 +0000 UTC m=+149.463118667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.031208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.031407 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.531377145 +0000 UTC m=+149.564461343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.037613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.038253 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.538219734 +0000 UTC m=+149.571303932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.039074 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sps4j" podStartSLOduration=9.03906149 podStartE2EDuration="9.03906149s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.038856133 +0000 UTC m=+149.071940331" watchObservedRunningTime="2026-02-21 06:49:34.03906149 +0000 UTC m=+149.072145688" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.138722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.139066 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.639050665 +0000 UTC m=+149.672134863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.181879 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" podStartSLOduration=127.181854229 podStartE2EDuration="2m7.181854229s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.142323855 +0000 UTC m=+149.175408053" watchObservedRunningTime="2026-02-21 06:49:34.181854229 +0000 UTC m=+149.214938427" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.182739 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" podStartSLOduration=127.182730435 podStartE2EDuration="2m7.182730435s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.178591559 +0000 UTC m=+149.211675757" watchObservedRunningTime="2026-02-21 06:49:34.182730435 +0000 UTC m=+149.215814633" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.208006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" podStartSLOduration=127.207984645 podStartE2EDuration="2m7.207984645s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.206662934 +0000 UTC m=+149.239747142" watchObservedRunningTime="2026-02-21 06:49:34.207984645 +0000 UTC m=+149.241068843" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.235068 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" podStartSLOduration=128.23505109 podStartE2EDuration="2m8.23505109s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.23344512 +0000 UTC m=+149.266529318" watchObservedRunningTime="2026-02-21 06:49:34.23505109 +0000 UTC m=+149.268135288" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.239952 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.240231 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.740221477 +0000 UTC m=+149.773305675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.261071 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" podStartSLOduration=127.261051921 podStartE2EDuration="2m7.261051921s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.258113352 +0000 UTC m=+149.291197550" watchObservedRunningTime="2026-02-21 06:49:34.261051921 +0000 UTC m=+149.294136119" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.342708 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.343058 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.843044309 +0000 UTC m=+149.876128507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.350449 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:34 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:34 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:34 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.350673 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.444223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.444776 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:34.944665024 +0000 UTC m=+149.977749222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.545513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.545725 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.045697162 +0000 UTC m=+150.078781360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.545953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.546283 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.04627669 +0000 UTC m=+150.079360878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.587550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"7940b11b8dba4e357ae885f3a3afc0427bbc4e2d9e9987be05626b3ccc6b48d8"} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"9b80189fe1a1351f24d6e3b9938619406bfe72ad9b115cb545588ece5f5703dc"} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632941 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632975 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.633448 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.633488 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.661416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.161399936 +0000 UTC m=+150.194484134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661892 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.663072 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.163062717 +0000 UTC m=+150.196146915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.679130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.679660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.684673 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.710482 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.719564 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.729455 4820 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.763726 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.763953 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.263928199 +0000 UTC m=+150.297012397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.764171 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.764496 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.264488347 +0000 UTC m=+150.297572545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.811647 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.864664 4820 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-21T06:49:34.729481331Z","Handler":null,"Name":""} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.866167 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.866524 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.366510425 +0000 UTC m=+150.399594623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.873835 4820 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.874053 4820 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.969856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.979018 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.979075 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.013920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.014208 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926 WatchSource:0}: Error finding container 9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926: Status 404 returned error can't find the container with id 9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.071022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.084473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.157814 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394 WatchSource:0}: Error finding container 003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394: Status 404 returned error can't find the container with id 003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.175951 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.185222 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.186107 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.187976 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.192594 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.273620 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.273954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.274001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.288132 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4 WatchSource:0}: Error finding container a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4: Status 404 returned error can't find the container with id a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.353473 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:35 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:35 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:35 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.353537 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " 
pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.376120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.376348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.395486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.398090 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.399599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.403003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.425541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.462443 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476136 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.498900 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.578598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: 
\"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.580538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.585758 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.586717 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.596006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.606914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.647213 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerID="d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae" exitCode=0 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.647454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" 
event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerDied","Data":"d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.650974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"08ab4f15aee047c9dfe96d9df48e491c33e5254834a87861b2d7297fa2e83b3e"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.651009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.652520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"22b90e97b32a7de4cd0ac2754111b813ee1bd717dfe1d8355254e7e0e59de193"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.652616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.654641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerStarted","Data":"b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678368 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.686657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"cc53a98b7dd17aaa34ec2e68f5b3bb8b18e65633865661349722679980e38577"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694092 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fc5ed7dacecf277fa20a79668d542cbc147476a4b104a56ffc3afe3e30c60646"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694412 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.706512 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.741321 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" podStartSLOduration=10.741299061 podStartE2EDuration="10.741299061s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:35.734842445 +0000 UTC m=+150.767926643" watchObservedRunningTime="2026-02-21 06:49:35.741299061 +0000 UTC m=+150.774383259" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.750474 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.754689 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.787605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.787769 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " 
pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.791844 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.792917 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.802742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.810610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.906523 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.986017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.986948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.987004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088276 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod 
\"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.090166 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.090181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.096893 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.107536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.115497 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.151403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:36 crc kubenswrapper[4820]: W0221 06:49:36.162991 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dd96409_63d5_46a5_a9cb_a8e59f7fcce8.slice/crio-a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237 WatchSource:0}: Error finding container a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237: Status 404 returned error can't find the container with id a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.314515 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.346621 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:36 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:36 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:36 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.346679 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:36 crc kubenswrapper[4820]: W0221 06:49:36.383836 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad8f1e2_40cf_4c0b_aa35_d737387eca67.slice/crio-a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579 WatchSource:0}: Error finding container a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579: Status 404 returned error can't find the container with id a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699147 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699482 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerStarted","Data":"a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701050 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701376 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" 
event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703463 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"04fd41dbab4d8a603151ac33844cbba8ff658b873d854bbf23d1ef0e3e50dc39"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.705016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerStarted","Data":"8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.705127 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706181 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"1186b29bef767e21ec1c625c6cc6253779166154a0a774141ac1f83ba9af24e6"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.811031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" podStartSLOduration=130.811013225 podStartE2EDuration="2m10.811013225s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:36.810298864 +0000 UTC m=+151.843383082" watchObservedRunningTime="2026-02-21 06:49:36.811013225 +0000 UTC m=+151.844097423" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.952158 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100677 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.101553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.105607 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd" (OuterVolumeSpecName: "kube-api-access-hmnkd") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). 
InnerVolumeSpecName "kube-api-access-hmnkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.106336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.183815 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: E0221 06:49:37.184028 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184162 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184288 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184964 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.186595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.197126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203216 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203298 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203312 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.263788 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304300 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.347965 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:37 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:37 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:37 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.348345 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.408960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: 
I0221 06:49:37.409084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.409103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.411142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.421728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.436912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.510252 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.583553 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.590534 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.596269 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.712852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.712930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.713155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725078 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725448 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerDied","Data":"88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2"} Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725471 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.751959 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: W0221 06:49:37.762295 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bc411a_7f2e_4a7c_8a27_d758d4716f0e.slice/crio-0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955 WatchSource:0}: Error finding container 0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955: Status 404 returned error can't find the container with id 0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955 Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: 
\"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.815076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.815709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.843129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.912958 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.933650 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.933747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.936166 4820 patch_prober.go:28] interesting pod/console-f9d7485db-cgbzf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.936209 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cgbzf" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.176132 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.177556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.188046 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.249171 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.265714 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.266460 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.268252 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.268582 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 21 06:49:38 crc kubenswrapper[4820]: W0221 06:49:38.279301 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod328474dd_edf9_4d6b_b9d9_50f591176ce1.slice/crio-f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad WatchSource:0}: Error finding container f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad: Status 404 returned error can't find the container with id f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.287637 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.342757 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.346209 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:38 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:38 crc kubenswrapper[4820]: 
[+]process-running ok Feb 21 06:49:38 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.346338 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.423723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.423764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.485330 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.547859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.584056 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.585567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.599303 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.601578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.605107 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694536 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694600 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694799 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694869 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739647 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" exitCode=0 Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739697 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03"} Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955"} Feb 21 06:49:38 crc 
kubenswrapper[4820]: I0221 06:49:38.745352 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" exitCode=0 Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.745552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c"} Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.745589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad"} Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.773435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.836635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.895295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.926646 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.001454 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.002803 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.030172 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.048039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138916 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.231601 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzp8\" 
(UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.241043 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.241175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: W0221 06:49:39.245147 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04595c48_2a70_4760_8e24_5266735b9e82.slice/crio-85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445 WatchSource:0}: Error finding container 85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445: Status 404 returned error can't find the container with id 85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445 Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.259491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.326734 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.346847 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:39 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:39 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:39 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.347083 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.542901 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 
21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.753882 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" exitCode=0 Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.753987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.754050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.756004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerStarted","Data":"cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.756282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerStarted","Data":"289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764408 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" exitCode=0 Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" 
event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerStarted","Data":"a45f177e1207be3c08153b6e35e267a2cf4dd2c4d9944405c0f459a97610a520"} Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.786692 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.786673937 podStartE2EDuration="1.786673937s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:39.785636255 +0000 UTC m=+154.818720463" watchObservedRunningTime="2026-02-21 06:49:39.786673937 +0000 UTC m=+154.819758145" Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.346505 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:40 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:40 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:40 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.346569 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.821151 4820 generic.go:334] "Generic (PLEG): container 
finished" podID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerID="cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f" exitCode=0 Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.821229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerDied","Data":"cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f"} Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.137803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.138638 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.146838 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.147292 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.150763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.276932 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.277030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.351443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.356184 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.378955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.379027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.379052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.417034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.478080 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.861061 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.124290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.193129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"42b7cabf-7765-4789-90b7-e8dabb197a7e\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.193229 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"42b7cabf-7765-4789-90b7-e8dabb197a7e\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.205291 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42b7cabf-7765-4789-90b7-e8dabb197a7e" (UID: "42b7cabf-7765-4789-90b7-e8dabb197a7e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.207347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42b7cabf-7765-4789-90b7-e8dabb197a7e" (UID: "42b7cabf-7765-4789-90b7-e8dabb197a7e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.300844 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.300871 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.848445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerStarted","Data":"aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175"} Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerDied","Data":"289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d"} Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850374 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d" Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850422 4820 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.520571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.816841 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.817225 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.866500 4820 generic.go:334] "Generic (PLEG): container finished" podID="dcce7871-a63c-4991-b931-4ab94a014424" containerID="7a77cd6f06b486924607882b7871c5c15d50eaacfd23fbd83e1dcdeb521fd47b" exitCode=0 Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.866564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerDied","Data":"7a77cd6f06b486924607882b7871c5c15d50eaacfd23fbd83e1dcdeb521fd47b"} Feb 21 06:49:47 crc kubenswrapper[4820]: I0221 06:49:47.940125 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:47 crc kubenswrapper[4820]: I0221 06:49:47.945866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 
06:49:48 crc kubenswrapper[4820]: I0221 06:49:48.697893 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.638689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.660224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.816674 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.961857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerDied","Data":"aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175"} Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.962157 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175" Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.975524 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"dcce7871-a63c-4991-b931-4ab94a014424\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"dcce7871-a63c-4991-b931-4ab94a014424\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcce7871-a63c-4991-b931-4ab94a014424" (UID: "dcce7871-a63c-4991-b931-4ab94a014424"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.067120 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.074915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcce7871-a63c-4991-b931-4ab94a014424" (UID: "dcce7871-a63c-4991-b931-4ab94a014424"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.168231 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.968590 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 06:49:53 crc kubenswrapper[4820]: I0221 06:49:53.996838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"] Feb 21 06:49:55 crc kubenswrapper[4820]: I0221 06:49:55.185465 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:50:02 crc kubenswrapper[4820]: I0221 06:50:02.049035 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"cfd19c96c78f13114fafe6e2f8d22f644d978e4f44d89e25f82eaeb6ebd0e9a7"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.063893 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.066098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.069504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.071223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.086414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.089147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"} Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.097717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"898f87566cf619682b2563278404e107d6e21fdef12135bdea44f107415f9ea9"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.104161 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.104232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" 
event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.106523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"f48fac4471614c75906d1467afec1707c20e35aa6c7b0ef5eb08a48d0d219955"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.108634 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.108670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.110893 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.110949 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.112515 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.112542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.117905 4820 generic.go:334] "Generic (PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.117964 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.127689 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.128114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.132011 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.132066 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.135133 4820 
generic.go:334] "Generic (PLEG): container finished" podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e" exitCode=0 Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.135154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e"} Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.216702 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bt6wj" podStartSLOduration=160.216678888 podStartE2EDuration="2m40.216678888s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:06.212469529 +0000 UTC m=+181.245553767" watchObservedRunningTime="2026-02-21 06:50:06.216678888 +0000 UTC m=+181.249763086" Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.424681 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:50:08 crc kubenswrapper[4820]: I0221 06:50:08.420354 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:50:10 crc kubenswrapper[4820]: I0221 06:50:10.159042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"} Feb 21 06:50:11 crc kubenswrapper[4820]: I0221 06:50:11.183115 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-dtbbw" podStartSLOduration=3.680531012 podStartE2EDuration="36.183100119s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.707508992 +0000 UTC m=+151.740593190" lastFinishedPulling="2026-02-21 06:50:09.210078099 +0000 UTC m=+184.243162297" observedRunningTime="2026-02-21 06:50:11.180012185 +0000 UTC m=+186.213096383" watchObservedRunningTime="2026-02-21 06:50:11.183100119 +0000 UTC m=+186.216184317" Feb 21 06:50:12 crc kubenswrapper[4820]: I0221 06:50:12.170453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"} Feb 21 06:50:12 crc kubenswrapper[4820]: I0221 06:50:12.187451 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfq7z" podStartSLOduration=2.769327039 podStartE2EDuration="35.187435182s" podCreationTimestamp="2026-02-21 06:49:37 +0000 UTC" firstStartedPulling="2026-02-21 06:49:38.741127739 +0000 UTC m=+153.774211937" lastFinishedPulling="2026-02-21 06:50:11.159235862 +0000 UTC m=+186.192320080" observedRunningTime="2026-02-21 06:50:12.183627707 +0000 UTC m=+187.216711905" watchObservedRunningTime="2026-02-21 06:50:12.187435182 +0000 UTC m=+187.220519380" Feb 21 06:50:13 crc kubenswrapper[4820]: I0221 06:50:13.816261 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:50:13 crc kubenswrapper[4820]: I0221 06:50:13.817160 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.189190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerStarted","Data":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"}
Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.198793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f"}
Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.244940 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-568r2" podStartSLOduration=2.700989918 podStartE2EDuration="36.244921415s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="2026-02-21 06:49:39.767165823 +0000 UTC m=+154.800250021" lastFinishedPulling="2026-02-21 06:50:13.31109732 +0000 UTC m=+188.344181518" observedRunningTime="2026-02-21 06:50:14.222205984 +0000 UTC m=+189.255290182" watchObservedRunningTime="2026-02-21 06:50:14.244921415 +0000 UTC m=+189.278005613"
Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.718962 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.738222 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfwch" podStartSLOduration=2.412120436 podStartE2EDuration="39.738206261s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.702571363 +0000 UTC m=+151.735655561" lastFinishedPulling="2026-02-21 06:50:14.028657188 +0000 UTC m=+189.061741386" observedRunningTime="2026-02-21 06:50:14.242223323 +0000 UTC m=+189.275307521" watchObservedRunningTime="2026-02-21 06:50:14.738206261 +0000 UTC m=+189.771290459"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.207607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"}
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.209716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerStarted","Data":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"}
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.215186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"}
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.217420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"}
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.233356 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gt7zt" podStartSLOduration=2.845149547 podStartE2EDuration="40.233339474s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.704334086 +0000 UTC m=+151.737418284" lastFinishedPulling="2026-02-21 06:50:14.092524013 +0000 UTC m=+189.125608211" observedRunningTime="2026-02-21 06:50:15.228742964 +0000 UTC m=+190.261827162" watchObservedRunningTime="2026-02-21 06:50:15.233339474 +0000 UTC m=+190.266423672"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.253424 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fwm8t" podStartSLOduration=2.968959189 podStartE2EDuration="38.253403024s" podCreationTimestamp="2026-02-21 06:49:37 +0000 UTC" firstStartedPulling="2026-02-21 06:49:38.749408211 +0000 UTC m=+153.782492409" lastFinishedPulling="2026-02-21 06:50:14.033852046 +0000 UTC m=+189.066936244" observedRunningTime="2026-02-21 06:50:15.249889268 +0000 UTC m=+190.282973476" watchObservedRunningTime="2026-02-21 06:50:15.253403024 +0000 UTC m=+190.286487222"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.269214 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j6kgh" podStartSLOduration=2.773873706 podStartE2EDuration="40.269194486s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.700817119 +0000 UTC m=+151.733901317" lastFinishedPulling="2026-02-21 06:50:14.196137899 +0000 UTC m=+189.229222097" observedRunningTime="2026-02-21 06:50:15.264339908 +0000 UTC m=+190.297424116" watchObservedRunningTime="2026-02-21 06:50:15.269194486 +0000 UTC m=+190.302278684"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.288460 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcn45" podStartSLOduration=2.938700858 podStartE2EDuration="37.288442652s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="2026-02-21 06:49:39.756758045 +0000 UTC m=+154.789842243" lastFinishedPulling="2026-02-21 06:50:14.106499849 +0000 UTC m=+189.139584037" observedRunningTime="2026-02-21 06:50:15.285046669 +0000 UTC m=+190.318130867" watchObservedRunningTime="2026-02-21 06:50:15.288442652 +0000 UTC m=+190.321526860"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.500021 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dtbbw"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.500068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dtbbw"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.687699 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dtbbw"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.755552 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gt7zt"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.755592 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gt7zt"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.906973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j6kgh"
Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.907026 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j6kgh"
Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.115934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.115978 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.273749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dtbbw"
Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.800190 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gt7zt" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" probeResult="failure" output=<
Feb 21 06:50:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 06:50:16 crc kubenswrapper[4820]: >
Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.940096 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-j6kgh" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" probeResult="failure" output=<
Feb 21 06:50:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 06:50:16 crc kubenswrapper[4820]: >
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.152385 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wfwch" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" probeResult="failure" output=<
Feb 21 06:50:17 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 06:50:17 crc kubenswrapper[4820]: >
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.511320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfq7z"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.512183 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfq7z"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.549039 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfq7z"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.913446 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fwm8t"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.913516 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fwm8t"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.940164 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 21 06:50:17 crc kubenswrapper[4820]: E0221 06:50:17.949261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949296 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: E0221 06:50:17.949321 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949330 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949851 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949887 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.950639 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.956060 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.956163 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.968756 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.992994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fwm8t"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.085026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.085174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.209441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.275298 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfq7z"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.288150 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.710465 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.928480 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.928529 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.240467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerStarted","Data":"b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920"}
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.240775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerStarted","Data":"3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b"}
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.254991 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.254971416 podStartE2EDuration="2.254971416s" podCreationTimestamp="2026-02-21 06:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:19.254788671 +0000 UTC m=+194.287872869" watchObservedRunningTime="2026-02-21 06:50:19.254971416 +0000 UTC m=+194.288055614"
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.328081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.328145 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.975797 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcn45" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" probeResult="failure" output=<
Feb 21 06:50:19 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 06:50:19 crc kubenswrapper[4820]: >
Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.246690 4820 generic.go:334] "Generic (PLEG): container finished" podID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerID="b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920" exitCode=0
Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.246764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerDied","Data":"b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920"}
Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.365772 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-568r2" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" probeResult="failure" output=<
Feb 21 06:50:20 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 06:50:20 crc kubenswrapper[4820]: >
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.483993 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.526707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"155e5f64-211d-4b89-b8dc-48f3edf80891\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") "
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.526793 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"155e5f64-211d-4b89-b8dc-48f3edf80891\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") "
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.527262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "155e5f64-211d-4b89-b8dc-48f3edf80891" (UID: "155e5f64-211d-4b89-b8dc-48f3edf80891"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.535085 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "155e5f64-211d-4b89-b8dc-48f3edf80891" (UID: "155e5f64-211d-4b89-b8dc-48f3edf80891"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.628742 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.628787 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerDied","Data":"3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b"}
Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259515 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b"
Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259812 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535177 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 21 06:50:24 crc kubenswrapper[4820]: E0221 06:50:24.535716 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535732 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535863 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.536324 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.544399 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.544841 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.551047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669060 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669135 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.787830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.872754 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.070925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 21 06:50:25 crc kubenswrapper[4820]: W0221 06:50:25.075801 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6ac3e04_b33d_46c2_8935_502b7c8d4bfc.slice/crio-ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a WatchSource:0}: Error finding container ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a: Status 404 returned error can't find the container with id ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.275731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerStarted","Data":"ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a"}
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.809189 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gt7zt"
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.848950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gt7zt"
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.949657 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j6kgh"
Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.999921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j6kgh"
Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.162472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.203387 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.284114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerStarted","Data":"fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902"}
Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.305490 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.305468798 podStartE2EDuration="2.305468798s" podCreationTimestamp="2026-02-21 06:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:26.302347039 +0000 UTC m=+201.335431257" watchObservedRunningTime="2026-02-21 06:50:26.305468798 +0000 UTC m=+201.338552996"
Feb 21 06:50:27 crc kubenswrapper[4820]: I0221 06:50:27.953575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fwm8t"
Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.145371 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfwch"]
Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.145637 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wfwch" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" containerID="cri-o://11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f" gracePeriod=2
Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.295183 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f" exitCode=0
Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.295253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f"}
Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.981106 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.019463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.270851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfwch"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579"}
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302785 4820 scope.go:117] "RemoveContainer" containerID="11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.326374 4820 scope.go:117] "RemoveContainer" containerID="c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") "
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") "
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") "
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.329739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities" (OuterVolumeSpecName: "utilities") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.335876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf" (OuterVolumeSpecName: "kube-api-access-s7zsf") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "kube-api-access-s7zsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.361577 4820 scope.go:117] "RemoveContainer" containerID="2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.375560 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.417694 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.433353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") on node \"crc\" DevicePath \"\""
Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.433408 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.128823 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"]
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.129136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j6kgh" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" containerID="cri-o://af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" gracePeriod=2
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.156224 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"]
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.156662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fwm8t" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" containerID="cri-o://d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" gracePeriod=2
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.376538 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.426654 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.428099 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfwch"]
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.431120 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wfwch"]
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.453436 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" containerID="cri-o://703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" gracePeriod=15
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.704525 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" path="/var/lib/kubelet/pods/4ad8f1e2-40cf-4c0b-aa35-d737387eca67/volumes"
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.826843 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh"
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.870841 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.877364 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t"
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931436 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") "
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931546 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") "
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") "
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.932618 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities" (OuterVolumeSpecName: "utilities") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.936564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw" (OuterVolumeSpecName: "kube-api-access-nw6fw") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "kube-api-access-nw6fw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.978884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.032834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033167 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") 
" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033508 
4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033544 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033820 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033861 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034659 4820 
reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034765 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034780 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034793 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035050 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.036090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities" (OuterVolumeSpecName: "utilities") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.038220 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.038678 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.039140 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.039886 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v" (OuterVolumeSpecName: "kube-api-access-4j82v") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "kube-api-access-4j82v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040147 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.041827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k" (OuterVolumeSpecName: "kube-api-access-kcr8k") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "kube-api-access-kcr8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.068762 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135915 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135948 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135995 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136007 4820 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136039 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136049 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136056 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136065 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136074 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136082 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 
06:50:32.136102 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136111 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136120 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136142 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136151 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.384723 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 
06:50:32.384797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.384795 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.385055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.385127 4820 scope.go:117] "RemoveContainer" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.387996 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2b27a90-ce04-40f3-9656-148cca792c55" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerDied","Data":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerDied","Data":"163e0224df79387e94d53de67771865cc2f448fe55307754f0c2f2e2575f77bd"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388099 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390557 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390647 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.404319 4820 scope.go:117] "RemoveContainer" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.428770 4820 scope.go:117] "RemoveContainer" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.431329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.434136 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.446605 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.454327 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.463135 4820 scope.go:117] "RemoveContainer" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.463455 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.464133 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": container with ID starting with d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b not found: ID does not exist" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc 
kubenswrapper[4820]: I0221 06:50:32.464178 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"} err="failed to get container status \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": rpc error: code = NotFound desc = could not find container \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": container with ID starting with d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.464290 4820 scope.go:117] "RemoveContainer" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.465194 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": container with ID starting with 0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e not found: ID does not exist" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465372 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"} err="failed to get container status \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": rpc error: code = NotFound desc = could not find container \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": container with ID starting with 0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465483 4820 scope.go:117] "RemoveContainer" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 
06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.465883 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": container with ID starting with f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c not found: ID does not exist" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465925 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c"} err="failed to get container status \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": rpc error: code = NotFound desc = could not find container \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": container with ID starting with f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465952 4820 scope.go:117] "RemoveContainer" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.468128 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.488813 4820 scope.go:117] "RemoveContainer" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.489330 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": container with ID starting with 703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab not found: ID does not exist" 
containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.489358 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"} err="failed to get container status \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": rpc error: code = NotFound desc = could not find container \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": container with ID starting with 703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.489381 4820 scope.go:117] "RemoveContainer" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.504822 4820 scope.go:117] "RemoveContainer" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.520710 4820 scope.go:117] "RemoveContainer" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534356 4820 scope.go:117] "RemoveContainer" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.534749 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": container with ID starting with af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8 not found: ID does not exist" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534788 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"} err="failed to get container status \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": rpc error: code = NotFound desc = could not find container \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": container with ID starting with af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8 not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534812 4820 scope.go:117] "RemoveContainer" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.535065 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": container with ID starting with f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4 not found: ID does not exist" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535098 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4"} err="failed to get container status \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": rpc error: code = NotFound desc = could not find container \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": container with ID starting with f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4 not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535121 4820 scope.go:117] "RemoveContainer" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.535491 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": container with ID starting with 38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff not found: ID does not exist" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535519 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff"} err="failed to get container status \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": rpc error: code = NotFound desc = could not find container \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": container with ID starting with 38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.754408 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.754660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-568r2" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" containerID="cri-o://c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" gracePeriod=2 Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.085658 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.257746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities" (OuterVolumeSpecName: "utilities") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.267457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8" (OuterVolumeSpecName: "kube-api-access-nzzp8") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "kube-api-access-nzzp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.357750 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.357782 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398491 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" exitCode=0 Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"} Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398627 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"a45f177e1207be3c08153b6e35e267a2cf4dd2c4d9944405c0f459a97610a520"} Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398735 4820 scope.go:117] "RemoveContainer" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.414309 4820 scope.go:117] "RemoveContainer" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.430854 4820 scope.go:117] "RemoveContainer" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445020 4820 scope.go:117] "RemoveContainer" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.445513 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": container with ID starting with c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf not found: ID does not exist" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445554 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"} err="failed to get container status \"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": rpc error: code = NotFound desc = could not find container 
\"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": container with ID starting with c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445581 4820 scope.go:117] "RemoveContainer" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.445949 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": container with ID starting with 3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193 not found: ID does not exist" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445982 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193"} err="failed to get container status \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": rpc error: code = NotFound desc = could not find container \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": container with ID starting with 3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193 not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.446004 4820 scope.go:117] "RemoveContainer" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.446402 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": container with ID starting with c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83 not found: ID does not exist" 
containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.446429 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83"} err="failed to get container status \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": rpc error: code = NotFound desc = could not find container \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": container with ID starting with c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83 not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.447897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.459358 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530599 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530614 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530624 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530630 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530639 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530645 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530678 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530694 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530705 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530711 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530722 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530728 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530754 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530761 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530770 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530777 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530785 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530791 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530805 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530811 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530820 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530825 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530833 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530838 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530920 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530928 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530938 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530945 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530952 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.531345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.534780 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.535899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.538285 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.538494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539131 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539285 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539406 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539902 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.542742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.542848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.546988 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.547900 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.549944 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.552457 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.558847 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 06:50:33 crc 
kubenswrapper[4820]: I0221 06:50:33.661820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661978 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: 
\"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.702467 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" path="/var/lib/kubelet/pods/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.703125 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" path="/var/lib/kubelet/pods/328474dd-edf9-4d6b-b9d9-50f591176ce1/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.703850 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" path="/var/lib/kubelet/pods/a2b27a90-ce04-40f3-9656-148cca792c55/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.733020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.735547 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762891 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762974 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763723 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763751 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.764297 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.766050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.766631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767410 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.768423 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.771035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.771715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.779252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 
06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.889782 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:34 crc kubenswrapper[4820]: I0221 06:50:34.276023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:34 crc kubenswrapper[4820]: I0221 06:50:34.410598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" event={"ID":"fe964e16-a5ab-4149-a65d-ad052695d25a","Type":"ContainerStarted","Data":"db68ead3824bb56ef79f60fd86d2fcac30607473272dc33b11592ae1794e8383"} Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.418809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" event={"ID":"fe964e16-a5ab-4149-a65d-ad052695d25a","Type":"ContainerStarted","Data":"a7456fcb33119538b84b9924c19c422849a220bac6941bb092e769a51c221c7f"} Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.419268 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.423782 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.442779 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" podStartSLOduration=29.442761748 podStartE2EDuration="29.442761748s" podCreationTimestamp="2026-02-21 06:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:35.440198906 +0000 UTC m=+210.473283104" watchObservedRunningTime="2026-02-21 
06:50:35.442761748 +0000 UTC m=+210.475845946" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.703009 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" path="/var/lib/kubelet/pods/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6/volumes" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816051 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816404 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816878 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" gracePeriod=600 Feb 21 06:50:44 crc kubenswrapper[4820]: I0221 06:50:44.467513 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" exitCode=0 Feb 21 06:50:44 crc kubenswrapper[4820]: I0221 06:50:44.467655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} Feb 21 06:50:45 crc kubenswrapper[4820]: I0221 06:50:45.474901 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.014347 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015618 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015882 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016077 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016125 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016169 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" gracePeriod=15 Feb 21 06:51:03 crc 
kubenswrapper[4820]: I0221 06:51:03.015919 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016419 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016451 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016459 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016476 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016490 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016499 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016514 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016522 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016538 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016547 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016560 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016568 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016691 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016702 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016715 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016727 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016736 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016745 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016879 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.017000 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.063624 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134133 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc 
kubenswrapper[4820]: I0221 06:51:03.134254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 
06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 
06:51:03.235462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235437 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.356855 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.379589 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.558889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6a7e3e659ca26bc70cc318d42e85eaae342dc2e65808645fd4fc3f3a6a00589b"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.561546 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.563618 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564619 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564645 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564653 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564765 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" exitCode=2 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564821 4820 scope.go:117] 
"RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567144 4820 generic.go:334] "Generic (PLEG): container finished" podID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerID="fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerDied","Data":"fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567731 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.569148 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.569574 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.573227 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c"} Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.574337 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.574536 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.577583 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.868705 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.869178 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.869441 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057770 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058372 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057868 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058565 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058580 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.062637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.159443 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.334736 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.386433 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.387128 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.387794 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.388267 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.388507 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.564768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.564907 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565178 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565375 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565387 4820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565397 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.585304 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586025 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" exitCode=0 Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586095 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586144 4820 scope.go:117] "RemoveContainer" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587523 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerDied","Data":"ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a"} Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587855 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.601346 4820 scope.go:117] "RemoveContainer" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.602544 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.602803 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603087 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc 
kubenswrapper[4820]: I0221 06:51:05.603460 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603710 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603977 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.616500 4820 scope.go:117] "RemoveContainer" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.633491 4820 scope.go:117] "RemoveContainer" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.644569 4820 scope.go:117] "RemoveContainer" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.660974 4820 scope.go:117] "RemoveContainer" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.680126 4820 scope.go:117] "RemoveContainer" 
containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.681697 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": container with ID starting with 0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52 not found: ID does not exist" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.681739 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52"} err="failed to get container status \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": rpc error: code = NotFound desc = could not find container \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": container with ID starting with 0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52 not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.681770 4820 scope.go:117] "RemoveContainer" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682150 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": container with ID starting with 20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c not found: ID does not exist" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682188 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c"} err="failed to get container status \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": rpc error: code = NotFound desc = could not find container \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": container with ID starting with 20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682209 4820 scope.go:117] "RemoveContainer" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682552 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": container with ID starting with 29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c not found: ID does not exist" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682594 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c"} err="failed to get container status \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": rpc error: code = NotFound desc = could not find container \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": container with ID starting with 29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682623 4820 scope.go:117] "RemoveContainer" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682898 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": container with ID starting with eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81 not found: ID does not exist" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682921 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81"} err="failed to get container status \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": rpc error: code = NotFound desc = could not find container \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": container with ID starting with eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81 not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682936 4820 scope.go:117] "RemoveContainer" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.683147 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": container with ID starting with ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb not found: ID does not exist" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683170 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb"} err="failed to get container status \"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": rpc error: code = NotFound desc = could not find container 
\"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": container with ID starting with ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683185 4820 scope.go:117] "RemoveContainer" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.683409 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": container with ID starting with 4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff not found: ID does not exist" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683435 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff"} err="failed to get container status \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": rpc error: code = NotFound desc = could not find container \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": container with ID starting with 4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.698433 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.698926 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.699116 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.707121 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.443633 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.444474 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.445364 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.445714 4820 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.447066 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: I0221 06:51:06.447816 4820 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.449850 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.650274 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 21 06:51:07 crc kubenswrapper[4820]: E0221 06:51:07.051065 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 21 06:51:07 crc kubenswrapper[4820]: E0221 06:51:07.852173 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 21 
06:51:09 crc kubenswrapper[4820]: E0221 06:51:09.452954 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 21 06:51:12 crc kubenswrapper[4820]: E0221 06:51:12.654054 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.696674 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.698414 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.700150 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.726291 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.726333 4820 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:14 crc kubenswrapper[4820]: E0221 06:51:14.726791 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.727358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: E0221 06:51:14.794404 4820 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" volumeName="registry-storage" Feb 21 06:51:15 crc kubenswrapper[4820]: E0221 06:51:15.336345 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.592599 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.592652 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640450 4820 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="735fee1979cdeff66130e88134841588ea4b7ebd53bc0aef95ad9dc4bfefaa0d" exitCode=0 Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640522 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"735fee1979cdeff66130e88134841588ea4b7ebd53bc0aef95ad9dc4bfefaa0d"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86089d6d48f38f72b42419f0bbfcc84842000fc9b9ba12de2e9e5e7a692525c6"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640803 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640815 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.641184 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: E0221 06:51:15.641202 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.641416 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643361 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643433 4820 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551" exitCode=1 Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644049 4820 scope.go:117] "RemoveContainer" containerID="f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644511 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644884 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.645476 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.710778 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.711107 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.711608 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.712018 4820 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.655713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e221a34bf2d85decc5e599515c1b73d92baf191aa8663e0a9c9c1399c5e20a23"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a56edaf54afc18e2138d5ebe923db837210206e361e79cd6f6bfd6560d99a8d"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b99e59f857c162668545dcc89db7d87f03f217ecd234e28cc61ba48e2b884d4"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b1ff58c79d6f3757a338637ab24928f6a0dc80677aad8d82b75fb74fe819142f"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.659280 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.659405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"660f4b8a1ef45b3186a7b148aa1774d6e3898d55c8964df158d023b88bb35ea5"} Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.592329 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.596520 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35789204aeb3e66dd268372ed837296102ab2ba444ce13c8983ad2986d638b98"} Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668989 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.669016 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.727874 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.728185 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.733009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:22 crc kubenswrapper[4820]: I0221 06:51:22.695549 4820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.699394 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:23 crc 
kubenswrapper[4820]: I0221 06:51:23.700087 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.705583 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.706005 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.711889 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebb6d7cc-a7f1-4d6c-9fdf-debc48af3b5c" Feb 21 06:51:24 crc kubenswrapper[4820]: I0221 06:51:24.703106 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:24 crc kubenswrapper[4820]: I0221 06:51:24.703134 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.706615 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.706900 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.711925 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebb6d7cc-a7f1-4d6c-9fdf-debc48af3b5c" Feb 21 06:51:28 crc 
kubenswrapper[4820]: I0221 06:51:28.200591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:32 crc kubenswrapper[4820]: I0221 06:51:32.009885 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 21 06:51:32 crc kubenswrapper[4820]: I0221 06:51:32.442761 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 21 06:51:33 crc kubenswrapper[4820]: I0221 06:51:33.289827 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 06:51:33 crc kubenswrapper[4820]: I0221 06:51:33.313373 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.278529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.414014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.896264 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.067377 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.110510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 
06:51:35.194694 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.459749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.854942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.886668 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.143484 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.464888 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.540537 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.547689 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.551562 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.578622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.593090 4820 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.652326 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.754080 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.775717 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.862785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.865301 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.951777 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.952401 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.989767 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.067594 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.093968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 21 06:51:37 
crc kubenswrapper[4820]: I0221 06:51:37.297638 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.402116 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.442141 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.529448 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.592371 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.624992 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.762922 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.811948 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.833319 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.839916 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.933258 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.154536 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.231820 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.258786 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.308796 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.331182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.339258 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.392082 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.466990 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.555995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.573281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.610099 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.722181 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.987636 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.990813 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.000993 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.019716 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.029602 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.068249 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.078265 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.095022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.206889 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 21 06:51:39 crc 
kubenswrapper[4820]: I0221 06:51:39.264737 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.292982 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.348455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.435897 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.459547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.480812 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.481140 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.537295 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.583080 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.584341 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.587079 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" 
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.621499 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.659481 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.682281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.695578 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.698096 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.853022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.928470 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.939872 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.004083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.013020 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.015402 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.032930 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.075818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.093624 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.109174 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.152320 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.196534 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.309136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.322043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.323621 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.368524 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 
06:51:40.381908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.474015 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.480424 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.562452 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.570029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.593582 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.862426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.930222 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.045356 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.118578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.125723 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.161645 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.164343 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.194146 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.216494 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.232012 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.232259 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.245744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.258879 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.273135 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.292544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.390894 4820 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.402773 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.528386 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.575649 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.596473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.618709 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.639327 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.772408 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.808678 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.837261 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.886882 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.908937 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.922231 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.048643 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.090491 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.150739 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.171331 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.496611 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.601564 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.649013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.738253 4820 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.827931 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.899438 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.899492 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.961266 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.081216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.119473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.501098 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.567139 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.601209 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.619590 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.651845 4820 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.732814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.948544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.003307 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.052973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.134112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.144529 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.169459 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.191510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.212233 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.236925 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 
06:51:44.250935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.290067 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.414512 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.450426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.542837 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.756157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.898132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.965124 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.971547 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.001775 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.048828 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" 
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.080348 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.184597 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.199818 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.214132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.304311 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.359375 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.366107 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.407066 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.453838 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.597078 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.598369 4820 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.616729 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.634304 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.928826 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.932763 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.939888 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.015445 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.095331 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.196073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.240619 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.342934 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.344319 4820 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.355381 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.386611 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.461630 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.524598 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.586347 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.604599 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.695803 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.700935 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.749556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.856906 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 21 06:51:46 crc kubenswrapper[4820]: 
I0221 06:51:46.957854 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.139340 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.191765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.317766 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.324724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.330880 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.363990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.367384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.536791 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.565568 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.670321 4820 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.775498 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.849880 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.887680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.897995 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.954428 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.179631 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.243317 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.284942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.308661 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.336179 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.433900 4820 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.496969 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.634426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.680102 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.786902 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.845927 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.900964 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.965371 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.986295 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.989997 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.989969981 podStartE2EDuration="45.989969981s" podCreationTimestamp="2026-02-21 06:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:22.399957721 
+0000 UTC m=+257.433041949" watchObservedRunningTime="2026-02-21 06:51:48.989969981 +0000 UTC m=+284.023054229" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993169 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993239 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/marketplace-operator-79b997595-wq5r9"] Feb 21 06:51:48 crc kubenswrapper[4820]: E0221 06:51:48.993554 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993760 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw","openshift-marketplace/community-operators-gt7zt","openshift-marketplace/redhat-marketplace-wfq7z","openshift-marketplace/redhat-operators-zcn45","openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994385 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994646 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dtbbw" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server" containerID="cri-o://da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994764 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gt7zt" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" containerID="cri-o://88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994890 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfq7z" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server" containerID="cri-o://10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.995042 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcn45" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" containerID="cri-o://5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.995130 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" containerID="cri-o://d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" gracePeriod=30 Feb 21 06:51:49 crc 
kubenswrapper[4820]: I0221 06:51:49.011182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.016328 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.061819 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.061803249 podStartE2EDuration="27.061803249s" podCreationTimestamp="2026-02-21 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:49.03387436 +0000 UTC m=+284.066958558" watchObservedRunningTime="2026-02-21 06:51:49.061803249 +0000 UTC m=+284.094887437" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.146005 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176704 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.178489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.184323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.202412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.205445 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.313339 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.313583 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.403224 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.412127 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.418411 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.427622 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.429462 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.430376 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.439586 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.472781 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489566 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod 
\"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489767 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489849 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489922 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.490711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities" (OuterVolumeSpecName: "utilities") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: 
"88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.491248 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities" (OuterVolumeSpecName: "utilities") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.491809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4" (OuterVolumeSpecName: "kube-api-access-snxb4") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "kube-api-access-snxb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.492221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities" (OuterVolumeSpecName: "utilities") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.492486 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities" (OuterVolumeSpecName: "utilities") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.493181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.493468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8" (OuterVolumeSpecName: "kube-api-access-rknh8") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: "88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "kube-api-access-rknh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.494801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c" (OuterVolumeSpecName: "kube-api-access-d498c") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). InnerVolumeSpecName "kube-api-access-d498c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.495285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw" (OuterVolumeSpecName: "kube-api-access-b7jnw") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "kube-api-access-b7jnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.495828 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb" (OuterVolumeSpecName: "kube-api-access-c4wvb") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "kube-api-access-c4wvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.503758 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.522584 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.544065 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.550041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.560043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: "88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.571644 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590356 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590383 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590391 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590400 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590408 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rknh8\" (UniqueName: 
\"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590417 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590436 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590444 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590451 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590459 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590467 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snxb4\" (UniqueName: 
\"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590476 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590484 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.598215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.622175 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.686730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.691043 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.722983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wq5r9"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.731844 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.825285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" event={"ID":"37683f41-a9aa-4abd-809d-25df5114e93a","Type":"ContainerStarted","Data":"ae5150e53962cd919ec6950a7adfca79ac114b6fa78479c0e6d8bb6e7c605f4b"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827265 4820 generic.go:334] "Generic (PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827337 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827356 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"1186b29bef767e21ec1c625c6cc6253779166154a0a774141ac1f83ba9af24e6"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827804 4820 scope.go:117] "RemoveContainer" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829862 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829931 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833212 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833281 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833357 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837870 4820 generic.go:334] "Generic (PLEG): container finished" podID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerDied","Data":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerDied","Data":"c67db1d6ea1ea9f42d159552b399ae3814a8a2a153770e3fc34b2a49bbb171e0"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837974 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841126 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841320 4820 scope.go:117] "RemoveContainer" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841465 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"04fd41dbab4d8a603151ac33844cbba8ff658b873d854bbf23d1ef0e3e50dc39"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.849884 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.854253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.865716 4820 scope.go:117] "RemoveContainer" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.866752 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.871965 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.876688 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.882400 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886445 4820 
scope.go:117] "RemoveContainer" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.886881 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": container with ID starting with da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9 not found: ID does not exist" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886923 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"} err="failed to get container status \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": rpc error: code = NotFound desc = could not find container \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": container with ID starting with da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886956 4820 scope.go:117] "RemoveContainer" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.887263 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": container with ID starting with b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72 not found: ID does not exist" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" 
Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887294 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"} err="failed to get container status \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": rpc error: code = NotFound desc = could not find container \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": container with ID starting with b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887313 4820 scope.go:117] "RemoveContainer" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.887585 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": container with ID starting with 7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38 not found: ID does not exist" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887618 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38"} err="failed to get container status \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": rpc error: code = NotFound desc = could not find container \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": container with ID starting with 7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887646 4820 scope.go:117] "RemoveContainer" 
containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.890979 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.898448 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.901237 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.901376 4820 scope.go:117] "RemoveContainer" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.914599 4820 scope.go:117] "RemoveContainer" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.924860 4820 scope.go:117] "RemoveContainer" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.925291 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": container with ID starting with 5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1 not found: ID does not exist" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925323 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"} err="failed to get container status \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": rpc error: code = NotFound desc = could not find 
container \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": container with ID starting with 5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925354 4820 scope.go:117] "RemoveContainer" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.925719 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": container with ID starting with dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8 not found: ID does not exist" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925742 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"} err="failed to get container status \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": rpc error: code = NotFound desc = could not find container \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": container with ID starting with dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925758 4820 scope.go:117] "RemoveContainer" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.926032 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": container with ID starting with 136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597 not found: ID does 
not exist" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.926058 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597"} err="failed to get container status \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": rpc error: code = NotFound desc = could not find container \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": container with ID starting with 136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.926076 4820 scope.go:117] "RemoveContainer" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.941415 4820 scope.go:117] "RemoveContainer" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.950668 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.961610 4820 scope.go:117] "RemoveContainer" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982145 4820 scope.go:117] "RemoveContainer" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.982710 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": container with ID starting with 10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e not found: ID does not exist" 
containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982769 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"} err="failed to get container status \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": rpc error: code = NotFound desc = could not find container \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": container with ID starting with 10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982797 4820 scope.go:117] "RemoveContainer" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.983144 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": container with ID starting with a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac not found: ID does not exist" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.983230 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"} err="failed to get container status \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": rpc error: code = NotFound desc = could not find container \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": container with ID starting with a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.983362 4820 scope.go:117] 
"RemoveContainer" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.984204 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": container with ID starting with ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03 not found: ID does not exist" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.984227 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03"} err="failed to get container status \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": rpc error: code = NotFound desc = could not find container \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": container with ID starting with ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.984242 4820 scope.go:117] "RemoveContainer" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:49.999486 4820 scope.go:117] "RemoveContainer" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:49.999757 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": container with ID starting with d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855 not found: ID does not exist" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc 
kubenswrapper[4820]: I0221 06:51:49.999774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} err="failed to get container status \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": rpc error: code = NotFound desc = could not find container \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": container with ID starting with d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:49.999791 4820 scope.go:117] "RemoveContainer" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.014458 4820 scope.go:117] "RemoveContainer" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.030537 4820 scope.go:117] "RemoveContainer" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.032007 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.043754 4820 scope.go:117] "RemoveContainer" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:50.044141 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": container with ID starting with 88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8 not found: ID does not exist" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044244 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"} err="failed to get container status \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": rpc error: code = NotFound desc = could not find container \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": container with ID starting with 88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044339 4820 scope.go:117] "RemoveContainer" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:50.044769 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": container with ID starting with ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79 not found: ID does not exist" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044795 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"} err="failed to get container status \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": rpc error: code = NotFound desc = could not find container \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": container with ID starting with ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044814 4820 scope.go:117] "RemoveContainer" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 
06:51:50.045471 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": container with ID starting with 74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd not found: ID does not exist" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.045596 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd"} err="failed to get container status \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": rpc error: code = NotFound desc = could not find container \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": container with ID starting with 74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.448126 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.598683 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.754687 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.852936 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" event={"ID":"37683f41-a9aa-4abd-809d-25df5114e93a","Type":"ContainerStarted","Data":"cea7128fa56505882f5ed35821fa8f478e31a70c1b3af21a7c999c22c108f559"} Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.853225 4820 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.856591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.870930 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" podStartSLOduration=12.870911591 podStartE2EDuration="12.870911591s" podCreationTimestamp="2026-02-21 06:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:50.866998234 +0000 UTC m=+285.900082432" watchObservedRunningTime="2026-02-21 06:51:50.870911591 +0000 UTC m=+285.903995789" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.992882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.149331 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.699903 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.703154 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04595c48-2a70-4760-8e24-5266735b9e82" path="/var/lib/kubelet/pods/04595c48-2a70-4760-8e24-5266735b9e82/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.703972 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" path="/var/lib/kubelet/pods/62bc411a-7f2e-4a7c-8a27-d758d4716f0e/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.704733 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" path="/var/lib/kubelet/pods/73ed3342-c0c6-46e6-a021-e3c6578829f6/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.705815 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" path="/var/lib/kubelet/pods/88718c88-6c0d-4eb1-af7e-14353e291e27/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.706594 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" path="/var/lib/kubelet/pods/9c9aa300-090c-44cb-91ed-1c1bdc44cbae/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.774559 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 21 06:51:52 crc kubenswrapper[4820]: I0221 06:51:52.083490 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 21 06:51:56 crc kubenswrapper[4820]: I0221 06:51:56.387575 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:56 crc kubenswrapper[4820]: I0221 06:51:56.388333 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c" gracePeriod=5 Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.908296 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.908710 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c" exitCode=137 Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.951004 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.951292 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138563 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139136 4820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139148 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139157 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139166 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.145343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.239929 4820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915683 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915751 4820 scope.go:117] "RemoveContainer" containerID="07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c" Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915842 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.702675 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.703691 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.713753 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.713794 4820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="58731c15-9a5c-48a8-b456-56e4448dae4f" Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.717848 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.717981 4820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="58731c15-9a5c-48a8-b456-56e4448dae4f" Feb 21 06:52:05 crc kubenswrapper[4820]: I0221 06:52:05.469872 4820 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.408516 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.408716 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" containerID="cri-o://a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" gracePeriod=30 Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.509808 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.510007 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager" containerID="cri-o://d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" gracePeriod=30 Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.808915 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.869540 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938144 4820 generic.go:334] "Generic (PLEG): container finished" podID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" exitCode=0 Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerDied","Data":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"} Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerDied","Data":"4d78f1e45a0c6a4cb8ba55254cd92ac8d35c6e02d5bd767c1be192646a5e40fd"} Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938280 4820 scope.go:117] "RemoveContainer" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939685 4820 generic.go:334] "Generic (PLEG): container finished" podID="a584a459-0672-47ef-bb32-c79f31790f91" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" exitCode=0 Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939721 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerDied","Data":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"} Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerDied","Data":"9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f"} Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939786 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953454 4820 scope.go:117] "RemoveContainer" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" Feb 21 06:52:06 crc kubenswrapper[4820]: E0221 06:52:06.953837 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": container with ID starting with a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053 not found: ID does not exist" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953869 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"} err="failed to get container status \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": rpc error: code = NotFound desc = could not find container \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": container with ID starting with 
a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053 not found: ID does not exist" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953890 4820 scope.go:117] "RemoveContainer" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.965215 4820 scope.go:117] "RemoveContainer" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" Feb 21 06:52:06 crc kubenswrapper[4820]: E0221 06:52:06.965573 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": container with ID starting with d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358 not found: ID does not exist" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.965597 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"} err="failed to get container status \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": rpc error: code = NotFound desc = could not find container \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": container with ID starting with d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358 not found: ID does not exist" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994159 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994286 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994329 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " Feb 21 06:52:06 crc 
kubenswrapper[4820]: I0221 06:52:06.994438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca" (OuterVolumeSpecName: "client-ca") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config" (OuterVolumeSpecName: "config") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config" (OuterVolumeSpecName: "config") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999499 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr" (OuterVolumeSpecName: "kube-api-access-qxxhr") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "kube-api-access-qxxhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx" (OuterVolumeSpecName: "kube-api-access-84hkx") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "kube-api-access-84hkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095771 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095810 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095819 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095827 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095836 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095845 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095855 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095863 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095872 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.275170 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.281447 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.284455 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:52:07 crc kubenswrapper[4820]: 
I0221 06:52:07.287013 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.584794 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585080 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585091 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585098 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585111 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585118 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585131 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585146 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585163 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585171 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585184 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585191 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585203 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585210 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585226 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585256 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-utilities" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585298 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585308 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585323 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585330 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585338 
4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-content" Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585352 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585362 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585477 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585489 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585498 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585508 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585517 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" Feb 21 06:52:07 crc 
kubenswrapper[4820]: I0221 06:52:07.585524 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585531 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585992 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588177 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588663 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588788 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588923 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.589021 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.589193 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.590978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.590980 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591087 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591529 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.592182 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 
06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.596733 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.598018 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.604401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.702087 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a584a459-0672-47ef-bb32-c79f31790f91" path="/var/lib/kubelet/pods/a584a459-0672-47ef-bb32-c79f31790f91/volumes" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.702612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" path="/var/lib/kubelet/pods/bec4e07b-2745-4a45-8717-3ee01f99919e/volumes" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 
06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod 
\"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.805987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806075 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod 
\"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.809037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.809201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.817219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.817403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.825125 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.825717 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-6vcbz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" podUID="d9a882b7-b656-49ef-8854-266b0c82f673" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.833965 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.836713 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.837142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.842108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.951272 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.967044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007560 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod 
\"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008370 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config" (OuterVolumeSpecName: "config") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.010929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.011352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz" (OuterVolumeSpecName: "kube-api-access-6vcbz") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "kube-api-access-6vcbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108229 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108581 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108592 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108613 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.285927 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956592 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerStarted","Data":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.957051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerStarted","Data":"6d3634c5d597a3497f486967727d92eb13c45b2cf751ecb5f1c210a3ddc19e37"} Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956658 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" containerID="cri-o://136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" gracePeriod=30 Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.957095 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.964139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.983041 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" podStartSLOduration=2.983021469 podStartE2EDuration="2.983021469s" podCreationTimestamp="2026-02-21 06:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:08.980041437 +0000 UTC m=+304.013125635" watchObservedRunningTime="2026-02-21 06:52:08.983021469 +0000 UTC m=+304.016105677" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.017616 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.018418 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.021687 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.025362 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.027749 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028187 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028368 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028880 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 
21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.029310 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.029446 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.030525 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.039316 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119112 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " 
pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220182 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.221462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.221962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.222435 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.240623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.240926 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.318743 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.382678 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.423763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca" (OuterVolumeSpecName: "client-ca") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.424692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config" (OuterVolumeSpecName: "config") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.426549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6" (OuterVolumeSpecName: "kube-api-access-fs6z6") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "kube-api-access-fs6z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.426626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524117 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524161 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524180 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524193 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.703116 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a882b7-b656-49ef-8854-266b0c82f673" path="/var/lib/kubelet/pods/d9a882b7-b656-49ef-8854-266b0c82f673/volumes" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.776141 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:09 crc kubenswrapper[4820]: W0221 06:52:09.785085 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1b83c5_e50f_463a_9392_497a22f7d844.slice/crio-3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6 WatchSource:0}: Error finding container 3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6: Status 404 returned error can't find 
the container with id 3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6 Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.963927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerStarted","Data":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.964284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerStarted","Data":"3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.964304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966001 4820 generic.go:334] "Generic (PLEG): container finished" podID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" exitCode=0 Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerDied","Data":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerDied","Data":"6d3634c5d597a3497f486967727d92eb13c45b2cf751ecb5f1c210a3ddc19e37"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966105 4820 
scope.go:117] "RemoveContainer" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966138 4820 patch_prober.go:28] interesting pod/controller-manager-846df49455-q45hw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966177 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966224 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.981116 4820 scope.go:117] "RemoveContainer" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: E0221 06:52:09.981684 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": container with ID starting with 136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1 not found: ID does not exist" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.981741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} err="failed to get 
container status \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": rpc error: code = NotFound desc = could not find container \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": container with ID starting with 136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1 not found: ID does not exist" Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.001267 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podStartSLOduration=3.001249095 podStartE2EDuration="3.001249095s" podCreationTimestamp="2026-02-21 06:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:09.984354149 +0000 UTC m=+305.017438337" watchObservedRunningTime="2026-02-21 06:52:10.001249095 +0000 UTC m=+305.034333313" Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.002335 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.005318 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.976944 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:11 crc kubenswrapper[4820]: E0221 06:52:11.588488 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc 
kubenswrapper[4820]: I0221 06:52:11.588499 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588576 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591649 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591715 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591845 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.592014 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.597260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.598891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.616519 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:11 crc kubenswrapper[4820]: 
I0221 06:52:11.702159 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" path="/var/lib/kubelet/pods/41ae1fcb-c09f-4395-8e87-b5ae4206c608/volumes" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.746841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " 
pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.848919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.850411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.850553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.855415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.870329 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.904102 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.302639 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.984957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" event={"ID":"b8d190e9-6eb1-4655-9158-5e563b1e8c67","Type":"ContainerStarted","Data":"3cf04a51b2889b2ac1c3f0a671ba25776c4615dcf653a0e38f78b8ae1b0ab0df"} Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.985303 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.985377 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" event={"ID":"b8d190e9-6eb1-4655-9158-5e563b1e8c67","Type":"ContainerStarted","Data":"21451f0c238797ca5f0d7d8299f9651f74de2a7ba8e74d1bb34d667341c5c6c2"} Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.993813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:13 crc kubenswrapper[4820]: I0221 06:52:13.009326 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" podStartSLOduration=6.009299899 podStartE2EDuration="6.009299899s" podCreationTimestamp="2026-02-21 06:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:13.006380198 +0000 UTC m=+308.039464406" 
watchObservedRunningTime="2026-02-21 06:52:13.009299899 +0000 UTC m=+308.042384097" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.521445 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.523425 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.525526 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.534995 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " 
pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.715059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.716219 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.718225 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.725254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.764663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.764912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.765036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: 
I0221 06:52:21.765165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.765417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.781970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.839807 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.967855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: 
\"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.969316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.991218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.029604 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.296579 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.399192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:22 crc kubenswrapper[4820]: W0221 06:52:22.405885 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa04064f_b88b_4b27_a882_1cbdae3d4485.slice/crio-b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a WatchSource:0}: Error finding container b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a: Status 404 returned error can't find the container with id b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.033751 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa04064f-b88b-4b27-a882-1cbdae3d4485" containerID="6a6e9bd95512a329ff09949fbe07a697ba7d08c16fc5af5f309fc9df573ee567" exitCode=0 Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.033794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerDied","Data":"6a6e9bd95512a329ff09949fbe07a697ba7d08c16fc5af5f309fc9df573ee567"} Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.034043 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a"} Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035676 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ef1d43db-e76a-4d34-8528-4c549bcbc2e2" containerID="56d3fb72a651afcc482ce84462dcbdc39f11a6db791756eb6f34b267b246adeb" exitCode=0 Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerDied","Data":"56d3fb72a651afcc482ce84462dcbdc39f11a6db791756eb6f34b267b246adeb"} Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerStarted","Data":"f2a933260e1873367ab71f82c0867fdf99cb661ff8470019f13d861119167edb"} Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.041825 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e"} Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.043546 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef1d43db-e76a-4d34-8528-4c549bcbc2e2" containerID="b7de4a0cd2fc1c3d14eccedb5dd85a9e8ac111167e09a2f35daea3b88303c058" exitCode=0 Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.043581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerDied","Data":"b7de4a0cd2fc1c3d14eccedb5dd85a9e8ac111167e09a2f35daea3b88303c058"} Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.316713 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"] Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.318388 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.321569 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.324958 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"] Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497853 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.513053 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.514314 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.516051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.526863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.598896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " 
pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.628452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.632347 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.701896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.702139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.702169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.804222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.804226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.819305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.872905 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.017761 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"] Feb 21 06:52:25 crc kubenswrapper[4820]: W0221 06:52:25.022229 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc232aa63_d98b_4e40_9efb_00e3eff02b50.slice/crio-998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5 WatchSource:0}: Error finding container 998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5: Status 404 returned error can't find the container with id 998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5 Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.053023 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa04064f-b88b-4b27-a882-1cbdae3d4485" containerID="68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e" exitCode=0 Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.053136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" 
event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerDied","Data":"68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e"} Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.065636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerStarted","Data":"5430d4c162f7263a939de1d095ac3bf5d39348a0c744a761efac28f0cc8effb5"} Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.067224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5"} Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.091520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78dnb" podStartSLOduration=2.700033287 podStartE2EDuration="4.09150511s" podCreationTimestamp="2026-02-21 06:52:21 +0000 UTC" firstStartedPulling="2026-02-21 06:52:23.037859474 +0000 UTC m=+318.070943672" lastFinishedPulling="2026-02-21 06:52:24.429331297 +0000 UTC m=+319.462415495" observedRunningTime="2026-02-21 06:52:25.088554288 +0000 UTC m=+320.121638486" watchObservedRunningTime="2026-02-21 06:52:25.09150511 +0000 UTC m=+320.124589308" Feb 21 06:52:25 crc kubenswrapper[4820]: W0221 06:52:25.253047 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72aad09_5c42_41f0_9699_9160d1750191.slice/crio-a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7 WatchSource:0}: Error finding container a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7: Status 404 returned error can't find the container with id a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7 Feb 21 06:52:25 crc 
kubenswrapper[4820]: I0221 06:52:25.257340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.074868 4820 generic.go:334] "Generic (PLEG): container finished" podID="c232aa63-d98b-4e40-9efb-00e3eff02b50" containerID="6ed649495fca2f2c93374d5f4a1bfc1f20fe3bdb03b312cb869d809b53f75547" exitCode=0 Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.074933 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerDied","Data":"6ed649495fca2f2c93374d5f4a1bfc1f20fe3bdb03b312cb869d809b53f75547"} Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077007 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" exitCode=0 Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6"} Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077079 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7"} Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.079777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"a6cf8019d4731f9995f5537b11776a6813b13eb7017f2ac9c322d6e8903279ef"} Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 
06:52:26.135318 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drqmx" podStartSLOduration=2.770062496 podStartE2EDuration="5.13529716s" podCreationTimestamp="2026-02-21 06:52:21 +0000 UTC" firstStartedPulling="2026-02-21 06:52:23.03549813 +0000 UTC m=+318.068582328" lastFinishedPulling="2026-02-21 06:52:25.400732794 +0000 UTC m=+320.433816992" observedRunningTime="2026-02-21 06:52:26.131921666 +0000 UTC m=+321.165005864" watchObservedRunningTime="2026-02-21 06:52:26.13529716 +0000 UTC m=+321.168381378" Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.432630 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.432854 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" containerID="cri-o://5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" gracePeriod=30 Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.004156 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085828 4820 generic.go:334] "Generic (PLEG): container finished" podID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" exitCode=0 Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085907 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerDied","Data":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"} Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085936 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerDied","Data":"3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6"} Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085968 4820 scope.go:117] "RemoveContainer" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.087713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193"} Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.089289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" 
event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"} Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.101941 4820 scope.go:117] "RemoveContainer" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" Feb 21 06:52:27 crc kubenswrapper[4820]: E0221 06:52:27.102348 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": container with ID starting with 5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f not found: ID does not exist" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.102379 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"} err="failed to get container status \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": rpc error: code = NotFound desc = could not find container \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": container with ID starting with 5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f not found: ID does not exist" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143222 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod 
\"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143312 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143414 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.145989 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config" (OuterVolumeSpecName: "config") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.146635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.146660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.150547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.150898 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv" (OuterVolumeSpecName: "kube-api-access-fkbhv") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "kube-api-access-fkbhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245065 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245096 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245106 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245117 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245125 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.407862 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.410525 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.594700 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"] Feb 21 06:52:27 crc kubenswrapper[4820]: E0221 
06:52:27.594889 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.594902 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.595001 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.595346 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597205 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597755 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597897 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.598058 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.600483 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.605860 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.609772 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"] Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.703623 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" path="/var/lib/kubelet/pods/4c1b83c5-e50f-463a-9392-497a22f7d844/volumes" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852889 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: 
I0221 06:52:27.852963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.853022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " 
pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.860938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.871950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.910878 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.097093 4820 generic.go:334] "Generic (PLEG): container finished" podID="c232aa63-d98b-4e40-9efb-00e3eff02b50" containerID="696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193" exitCode=0 Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.097191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerDied","Data":"696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193"} Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.101008 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" exitCode=0 Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.101070 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"} Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.111736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"] Feb 21 06:52:28 crc kubenswrapper[4820]: W0221 06:52:28.122107 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9899582c_a7b1_446d_93da_ea8774aafbb3.slice/crio-2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec WatchSource:0}: Error finding container 2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec: Status 404 returned error can't find the container with id 2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec Feb 21 06:52:29 crc 
kubenswrapper[4820]: I0221 06:52:29.109709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"761c69d1caddd72f68445ed8df3659e75dc11bab514b8c946d0870aadded4a76"} Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.112263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"} Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113594 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" event={"ID":"9899582c-a7b1-446d-93da-ea8774aafbb3","Type":"ContainerStarted","Data":"50e65abb62b18cae371ab979b7f718cc96d9a006b8e312855e460bf5cc338d73"} Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" event={"ID":"9899582c-a7b1-446d-93da-ea8774aafbb3","Type":"ContainerStarted","Data":"2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec"} Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113807 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.118695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.130886 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9t7gg" podStartSLOduration=2.65180823 podStartE2EDuration="5.130868469s" 
podCreationTimestamp="2026-02-21 06:52:24 +0000 UTC" firstStartedPulling="2026-02-21 06:52:26.077297212 +0000 UTC m=+321.110381450" lastFinishedPulling="2026-02-21 06:52:28.556357491 +0000 UTC m=+323.589441689" observedRunningTime="2026-02-21 06:52:29.12907754 +0000 UTC m=+324.162161738" watchObservedRunningTime="2026-02-21 06:52:29.130868469 +0000 UTC m=+324.163952667" Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.148998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rj56" podStartSLOduration=2.696038818 podStartE2EDuration="5.148980908s" podCreationTimestamp="2026-02-21 06:52:24 +0000 UTC" firstStartedPulling="2026-02-21 06:52:26.079685478 +0000 UTC m=+321.112769716" lastFinishedPulling="2026-02-21 06:52:28.532627608 +0000 UTC m=+323.565711806" observedRunningTime="2026-02-21 06:52:29.146864149 +0000 UTC m=+324.179948347" watchObservedRunningTime="2026-02-21 06:52:29.148980908 +0000 UTC m=+324.182065106" Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.165155 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" podStartSLOduration=3.165140383 podStartE2EDuration="3.165140383s" podCreationTimestamp="2026-02-21 06:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:29.163312662 +0000 UTC m=+324.196396870" watchObservedRunningTime="2026-02-21 06:52:29.165140383 +0000 UTC m=+324.198224581" Feb 21 06:52:31 crc kubenswrapper[4820]: I0221 06:52:31.840141 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:31 crc kubenswrapper[4820]: I0221 06:52:31.841435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:31 crc 
kubenswrapper[4820]: I0221 06:52:31.892921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.030232 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.030536 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.064689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.158723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.163994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.579875 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.580960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.591085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.639084 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.639141 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.684587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod 
\"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759285 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.791283 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.863938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.864503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.870470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.873243 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.873276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.874228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.889409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.893914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.920230 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.950831 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.189571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.194407 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:35 crc kubenswrapper[4820]: W0221 06:52:35.359998 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c3ac2e_5829_4263_a162_b2faf5943159.slice/crio-fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b WatchSource:0}: Error finding container fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b: Status 404 returned error can't find the container with id fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.361678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.148319 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" event={"ID":"c4c3ac2e-5829-4263-a162-b2faf5943159","Type":"ContainerStarted","Data":"5eb62fb7015f20c443d01aace01866a0aafb4943f1ef07e517b9299add9c206a"} Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.148370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" event={"ID":"c4c3ac2e-5829-4263-a162-b2faf5943159","Type":"ContainerStarted","Data":"fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b"} Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.167069 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" podStartSLOduration=2.167048747 podStartE2EDuration="2.167048747s" podCreationTimestamp="2026-02-21 06:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:36.16464669 +0000 UTC m=+331.197730888" watchObservedRunningTime="2026-02-21 06:52:36.167048747 +0000 UTC m=+331.200132955" Feb 21 06:52:37 crc kubenswrapper[4820]: I0221 06:52:37.154171 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:54 crc kubenswrapper[4820]: I0221 06:52:54.958818 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:55 crc kubenswrapper[4820]: I0221 06:52:55.024943 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:13 crc kubenswrapper[4820]: I0221 06:53:13.816713 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:53:13 crc kubenswrapper[4820]: I0221 06:53:13.817135 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.075870 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" 
podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" containerID="cri-o://8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" gracePeriod=30 Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408391 4820 generic.go:334] "Generic (PLEG): container finished" podID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerID="8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" exitCode=0 Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerDied","Data":"8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6"} Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408742 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerDied","Data":"b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524"} Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408764 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.427493 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.611535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.611911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg" (OuterVolumeSpecName: "kube-api-access-g6nlg") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). 
InnerVolumeSpecName "kube-api-access-g6nlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.613802 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618313 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618771 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618795 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618810 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.620155 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.620278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.622396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.623071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720147 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720188 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720205 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720223 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.413655 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.443178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.458579 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.701924 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" path="/var/lib/kubelet/pods/bcdc0b91-9179-44c7-9e5d-beb73c2b1110/volumes" Feb 21 06:53:43 crc kubenswrapper[4820]: I0221 06:53:43.816285 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:53:43 crc kubenswrapper[4820]: I0221 06:53:43.816918 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.816724 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.818533 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.818972 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.819552 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.819676 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" gracePeriod=600 Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.749487 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" exitCode=0 Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.749596 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.750301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.750443 4820 scope.go:117] "RemoveContainer" containerID="04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" Feb 21 06:56:05 crc kubenswrapper[4820]: I0221 06:56:05.839770 4820 scope.go:117] "RemoveContainer" containerID="8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" Feb 21 06:56:43 crc kubenswrapper[4820]: I0221 06:56:43.816938 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:56:43 crc kubenswrapper[4820]: I0221 06:56:43.817715 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:13 crc kubenswrapper[4820]: I0221 06:57:13.816379 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:57:13 crc kubenswrapper[4820]: I0221 06:57:13.817050 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.816674 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.819135 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.819443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.820526 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.820790 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" gracePeriod=600 Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203221 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" exitCode=0 Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203700 4820 scope.go:117] "RemoveContainer" containerID="32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.615724 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616544 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" containerID="cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" containerID="cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616966 4820 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" containerID="cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616993 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" containerID="cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617022 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617049 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" containerID="cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617078 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" containerID="cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.674546 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" 
containerID="cri-o://e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.901257 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.904055 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-acl-logging/0.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.904688 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-controller/0.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.905076 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956729 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d924v"] Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956907 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956918 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956933 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956939 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956946 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956963 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956973 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kubecfg-setup" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kubecfg-setup" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956990 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956999 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957009 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957014 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957023 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957029 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957037 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957042 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957051 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957057 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957082 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957087 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957169 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957179 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957187 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957194 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957211 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957217 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957224 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957233 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957259 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957269 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957350 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957356 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957365 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957371 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957446 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957455 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.959866 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035826 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035907 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035924 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036016 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash" (OuterVolumeSpecName: "host-slash") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036064 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log" (OuterVolumeSpecName: "node-log") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036092 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036134 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036219 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036251 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036580 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket" (OuterVolumeSpecName: "log-socket") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036858 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036874 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037631 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037652 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037661 4820 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037670 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037680 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037688 4820 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037696 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 
06:58:04.037705 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037716 4820 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037751 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037762 4820 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037771 4820 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037779 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037789 4820 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037799 4820 reconciler_common.go:293] "Volume detached for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037809 4820 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037820 4820 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.041448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx" (OuterVolumeSpecName: "kube-api-access-2wgvx") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "kube-api-access-2wgvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.041480 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.048523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138878 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139101 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139303 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139406 4820 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139419 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139429 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240198 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240213 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240535 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241315 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.243306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.244357 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.255078 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.289294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.317635 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.319924 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-acl-logging/0.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.321278 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-controller/0.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322043 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322063 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322070 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322077 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322084 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322090 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322096 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" exitCode=143
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322103 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" exitCode=143
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322117 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322277 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322289 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322296 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322303 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322310 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322316 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322322 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322329 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322335 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322358 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322366 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322373 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322380 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322387 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322393 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322399 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322406 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322412 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322419 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322441 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322450 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322456 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322463 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322469 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322477 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322483 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322489 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322496 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322502 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322523 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322531 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322537 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322544 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322552 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322557 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322564 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21
06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322570 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322576 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322583 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322348 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.324120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b1c3f48ef79ebc2d14aecaf5e95da135ec3aa850d9709fde134f32e2af04e50f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.337645 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339256 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339373 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" exitCode=2 Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339468 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339527 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.340054 4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.340229 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.349916 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.358657 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.361894 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.394139 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.418852 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 
06:58:04.432473 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.444005 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.458301 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.504040 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.515401 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.528810 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.542771 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543170 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543201 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status 
\"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543225 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543627 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543668 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543694 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543964 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544174 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544283 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.544649 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544674 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID 
starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544688 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545045 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545133 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545204 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545556 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 
06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545580 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545595 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545913 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545947 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545967 4820 scope.go:117] "RemoveContainer" 
containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.546306 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546378 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546450 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.546761 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546786 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546801 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.547018 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547040 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547054 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547260 4820 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547285 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547458 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547478 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547647 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not 
found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547672 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547849 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547872 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548063 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548087 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548395 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get 
container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548468 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548777 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548852 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549150 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549223 4820 scope.go:117] "RemoveContainer" 
containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549593 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549663 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549957 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550022 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550328 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could 
not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550355 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550628 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550651 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550922 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550946 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 
06:58:04.551212 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551245 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551542 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 
8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551795 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552054 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552130 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552436 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552454 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552661 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552691 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552874 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552900 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553153 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not 
exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553171 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553353 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553374 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553588 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553659 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554052 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status 
\"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554074 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554279 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554361 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554763 4820 scope.go:117] "RemoveContainer" 
containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555015 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555053 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555321 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555421 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555691 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could 
not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555764 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556091 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556159 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556496 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.349402 4820 generic.go:334] "Generic (PLEG): container finished" podID="3861e6c5-94cc-44f1-b27b-96163c33ab85" 
containerID="ee6f5fa8750a1ee1efcba99330f9bb791e174552d0c87f21c98cd536e693c862" exitCode=0 Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.349454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerDied","Data":"ee6f5fa8750a1ee1efcba99330f9bb791e174552d0c87f21c98cd536e693c862"} Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.709382 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70ec449-ba11-47dd-a60c-f77993670045" path="/var/lib/kubelet/pods/a70ec449-ba11-47dd-a60c-f77993670045/volumes" Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.896454 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf" Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"6d4dc05deddf61f0b8daccaf3beab53531d75cd199d87e0b70b97f256c6c01ba"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"d09400320f3883a4352f27a47ffe4d03d977a6d7b40e67b3560bb9452ee99886"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b6ed13dfdcb6e914f2130d66e24190d91eccbda618a97ed7f70f83825b40f0b4"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358828 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" 
event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"2cb2bd6d50c8de18b59cc21b6a0c4887e7ebd696f2a63479a4c81c810822d2e9"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b54362445fc40925aa125900cd29e1c644294f051192bcf5879cfac01bdf6c15"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358846 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"0b5c1f4165e08b1a3efa1246bf1a60990f1a69593041a25746961924a3c923c6"} Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.359987 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log" Feb 21 06:58:08 crc kubenswrapper[4820]: I0221 06:58:08.378310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"aeb6e959ef2d828f5d648dd9c17bd8168f92c3984f4453b4a892180018b8640f"} Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.187515 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.188477 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190622 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190675 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190622 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.192883 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.399201 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400779 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.401473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.421523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lqj\" (UniqueName: 
\"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.506721 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530299 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530395 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530433 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530556 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.234543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.234908 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.235302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273623 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273681 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273701 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273738 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.397340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"d38071d605b64db3a9a45cc342615932511de64d66b67e3f79ee517d8edba6a2"} Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.397977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.398087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.447215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.516536 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" podStartSLOduration=8.516515383 podStartE2EDuration="8.516515383s" podCreationTimestamp="2026-02-21 06:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:58:11.437288349 +0000 UTC m=+666.470372547" watchObservedRunningTime="2026-02-21 06:58:11.516515383 +0000 UTC m=+666.549599581" Feb 21 06:58:12 crc kubenswrapper[4820]: I0221 06:58:12.404464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:12 crc kubenswrapper[4820]: I0221 06:58:12.482754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:18 crc kubenswrapper[4820]: I0221 06:58:18.696558 
4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:18 crc kubenswrapper[4820]: E0221 06:58:18.697037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9" Feb 21 06:58:23 crc kubenswrapper[4820]: I0221 06:58:23.696548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: I0221 06:58:23.697753 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726449 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726543 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726570 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726645 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" Feb 21 06:58:29 crc kubenswrapper[4820]: I0221 06:58:29.696921 4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:30 crc kubenswrapper[4820]: I0221 06:58:30.526585 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log" Feb 21 06:58:30 crc kubenswrapper[4820]: I0221 06:58:30.526904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"3d9b631313cf6fc11b87dd9d120ece7594f828813adbf98746fe417b673ae9ba"} Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.319352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.696359 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.697136 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.948194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.954613 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 06:58:35 crc kubenswrapper[4820]: I0221 06:58:35.558911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerStarted","Data":"d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa"} Feb 21 06:58:37 crc kubenswrapper[4820]: I0221 06:58:37.571308 4820 generic.go:334] "Generic (PLEG): container finished" podID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerID="edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374" exitCode=0 Feb 21 06:58:37 crc kubenswrapper[4820]: I0221 06:58:37.571394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerDied","Data":"edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374"} Feb 21 06:58:38 crc kubenswrapper[4820]: I0221 06:58:38.847598 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.041075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj" (OuterVolumeSpecName: "kube-api-access-r9lqj") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "kube-api-access-r9lqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.055957 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135495 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135544 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerDied","Data":"d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa"} Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587143 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587223 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.915755 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:45 crc kubenswrapper[4820]: E0221 06:58:45.917026 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.917119 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.917299 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.918062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.920700 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.936233 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.020512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 
06:58:46.020575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.020706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121773 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121842 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.122213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.123407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.141038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.274071 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.450597 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.628169 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerStarted","Data":"a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094"} Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.628487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerStarted","Data":"1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387"} Feb 21 06:58:47 crc kubenswrapper[4820]: I0221 06:58:47.635496 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094" exitCode=0 Feb 21 06:58:47 crc kubenswrapper[4820]: I0221 06:58:47.635557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094"} Feb 21 06:58:49 crc kubenswrapper[4820]: I0221 06:58:49.647789 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="bc1a66108aaac39e7b2ac18192ce7070814f68091053332da91727692ceb8f51" exitCode=0 Feb 21 06:58:49 crc kubenswrapper[4820]: I0221 06:58:49.647853 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"bc1a66108aaac39e7b2ac18192ce7070814f68091053332da91727692ceb8f51"} Feb 21 06:58:50 crc kubenswrapper[4820]: I0221 06:58:50.658746 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="af7bd2bf9bde375bf6eb49b1f68e0eb467edc8c3c3fc4d789d5c3f1e133b0407" exitCode=0 Feb 21 06:58:50 crc kubenswrapper[4820]: I0221 06:58:50.658848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"af7bd2bf9bde375bf6eb49b1f68e0eb467edc8c3c3fc4d789d5c3f1e133b0407"} Feb 21 06:58:51 crc kubenswrapper[4820]: I0221 06:58:51.917114 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.090385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle" (OuterVolumeSpecName: "bundle") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.096082 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr" (OuterVolumeSpecName: "kube-api-access-dqxzr") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "kube-api-access-dqxzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.191152 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.191198 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.277364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util" (OuterVolumeSpecName: "util") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.292633 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672441 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387"} Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672482 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672486 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.591811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.591994 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="util" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="util" Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.592012 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="pull" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592018 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="pull" Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.592036 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592042 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592127 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592464 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.603482 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.604109 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.603859 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-985r8" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.609946 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.757071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: \"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.858775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: \"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.875170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: 
\"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.964530 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:55 crc kubenswrapper[4820]: I0221 06:58:55.129948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:55 crc kubenswrapper[4820]: I0221 06:58:55.692498 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" event={"ID":"375887b5-9d2e-4af8-9128-789ebd290f97","Type":"ContainerStarted","Data":"e3c8550742b9818c3c05d7bfdcb5bfabbbc87b0aaa55800b5253bb606c1038c5"} Feb 21 06:58:57 crc kubenswrapper[4820]: I0221 06:58:57.707796 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" event={"ID":"375887b5-9d2e-4af8-9128-789ebd290f97","Type":"ContainerStarted","Data":"94b9282885a29e1e65024a337441fe18a0cf000686ec361502c2bac2b70f200a"} Feb 21 06:58:57 crc kubenswrapper[4820]: I0221 06:58:57.733221 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" podStartSLOduration=1.6844870410000001 podStartE2EDuration="3.733192858s" podCreationTimestamp="2026-02-21 06:58:54 +0000 UTC" firstStartedPulling="2026-02-21 06:58:55.142903064 +0000 UTC m=+710.175987262" lastFinishedPulling="2026-02-21 06:58:57.191608881 +0000 UTC m=+712.224693079" observedRunningTime="2026-02-21 06:58:57.732981993 +0000 UTC m=+712.766066201" watchObservedRunningTime="2026-02-21 06:58:57.733192858 +0000 UTC m=+712.766277116" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.659123 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 
06:58:58.659940 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.666450 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c7pvt" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.678232 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.681277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.682069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.683792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.692039 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tz942"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.692683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.698987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.819113 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.820005 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.821731 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.822220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.822415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wlznm" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.827950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtlh\" (UniqueName: \"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828069 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.833010 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod 
\"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtlh\" (UniqueName: \"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: E0221 06:58:58.930707 4820 secret.go:188] Couldn't get secret 
openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: E0221 06:58:58.930780 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair podName:62b9a00a-9b7e-4057-bc85-2a16c48957f4 nodeName:}" failed. No retries permitted until 2026-02-21 06:58:59.430760124 +0000 UTC m=+714.463844322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair") pod "nmstate-webhook-866bcb46dc-c8gmp" (UID: "62b9a00a-9b7e-4057-bc85-2a16c48957f4") : secret "openshift-nmstate-webhook" not found Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930874 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930914 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.931020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.931173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.952429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtlh\" (UniqueName: 
\"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.959876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.964856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.974114 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.026049 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.026702 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032107 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: E0221 06:58:59.032175 4820 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 21 06:58:59 crc kubenswrapper[4820]: E0221 06:58:59.032228 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert podName:15902f84-d2f7-42a0-929e-89c21cffddd8 nodeName:}" failed. No retries permitted until 2026-02-21 06:58:59.532212907 +0000 UTC m=+714.565297105 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-b5kf2" (UID: "15902f84-d2f7-42a0-929e-89c21cffddd8") : secret "plugin-serving-cert" not found Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.062443 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.066482 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.081038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134204 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 
06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.234995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc 
kubenswrapper[4820]: I0221 06:58:59.235427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.236254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.239922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 
06:58:59.240396 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.240665 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.241140 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.242280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.250539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.277691 4820 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:59 crc kubenswrapper[4820]: W0221 06:58:59.281830 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7930d8a_8ded_4552_9c0a_aa73fa2006e2.slice/crio-264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38 WatchSource:0}: Error finding container 264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38: Status 404 returned error can't find the container with id 264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38 Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.339211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.439992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.443223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.485706 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: W0221 06:58:59.490669 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df248d1_5aca_4b9d_97f5_9e4ff67ef219.slice/crio-ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf WatchSource:0}: Error finding container ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf: Status 404 returned error can't find the container with id ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.540920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.543747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.649693 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.735553 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.745537 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-6nvxq" event={"ID":"7df248d1-5aca-4b9d-97f5-9e4ff67ef219","Type":"ContainerStarted","Data":"f7081e9cdff96c1c1af93c0e1c2819be80f9df08689fdde3d6f3185dc62ab219"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.745575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-6nvxq" event={"ID":"7df248d1-5aca-4b9d-97f5-9e4ff67ef219","Type":"ContainerStarted","Data":"ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.748300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.749229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tz942" event={"ID":"a6c76731-bd23-43eb-84f6-84d675965035","Type":"ContainerStarted","Data":"22adcde67782a676db42dc3bb6263f558f61749b7265350ca96293b7d510cc39"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.767273 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5698ddd759-6nvxq" podStartSLOduration=1.767254103 podStartE2EDuration="1.767254103s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:58:59.764768505 +0000 UTC m=+714.797852703" watchObservedRunningTime="2026-02-21 06:58:59.767254103 +0000 UTC m=+714.800338301" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 
06:58:59.902553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.976084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:59:00 crc kubenswrapper[4820]: I0221 06:59:00.756279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" event={"ID":"62b9a00a-9b7e-4057-bc85-2a16c48957f4","Type":"ContainerStarted","Data":"0961319806db638be7dacbf7d5df3428b961486cc05353c81b9ba146c2afcdbf"} Feb 21 06:59:00 crc kubenswrapper[4820]: I0221 06:59:00.757334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" event={"ID":"15902f84-d2f7-42a0-929e-89c21cffddd8","Type":"ContainerStarted","Data":"bb7f418cf5ab07e0d8ed6fafd7c7ee48ae84db1961f292dffa72ab01cf1da892"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.763469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"126c5081a29b742b19beb4d9030d81abb61fa053a1711c3a958c64c1addbcca9"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.766024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tz942" event={"ID":"a6c76731-bd23-43eb-84f6-84d675965035","Type":"ContainerStarted","Data":"f847d0eb4586d68de909563cb431d9be88e7983b82d431e38e1e474023ec961c"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.766175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.768805 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" 
event={"ID":"62b9a00a-9b7e-4057-bc85-2a16c48957f4","Type":"ContainerStarted","Data":"e3cfb83da897c6e4c413869ade397ade6c24e72116d1b515b38491d7ae8476e1"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.768943 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.779626 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tz942" podStartSLOduration=1.443736558 podStartE2EDuration="3.779610685s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.118104065 +0000 UTC m=+714.151188273" lastFinishedPulling="2026-02-21 06:59:01.453978202 +0000 UTC m=+716.487062400" observedRunningTime="2026-02-21 06:59:01.777550428 +0000 UTC m=+716.810634636" watchObservedRunningTime="2026-02-21 06:59:01.779610685 +0000 UTC m=+716.812694883" Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 06:59:02.778585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" event={"ID":"15902f84-d2f7-42a0-929e-89c21cffddd8","Type":"ContainerStarted","Data":"11b0f0d8897db0b87de016191e55d070318bf0c4ff5531dbbbdf2cd6ce9dd341"} Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 06:59:02.795916 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" podStartSLOduration=2.436071996 podStartE2EDuration="4.795892524s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.990218333 +0000 UTC m=+715.023302531" lastFinishedPulling="2026-02-21 06:59:02.350038871 +0000 UTC m=+717.383123059" observedRunningTime="2026-02-21 06:59:02.789695531 +0000 UTC m=+717.822779739" watchObservedRunningTime="2026-02-21 06:59:02.795892524 +0000 UTC m=+717.828976722" Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 
06:59:02.796805 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" podStartSLOduration=3.253596514 podStartE2EDuration="4.79679664s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.912417492 +0000 UTC m=+714.945501690" lastFinishedPulling="2026-02-21 06:59:01.455617588 +0000 UTC m=+716.488701816" observedRunningTime="2026-02-21 06:59:01.813610506 +0000 UTC m=+716.846694694" watchObservedRunningTime="2026-02-21 06:59:02.79679664 +0000 UTC m=+717.829880838" Feb 21 06:59:03 crc kubenswrapper[4820]: I0221 06:59:03.784325 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"0fdb247c12bc64fa564d3e792e63e1b7d5d8fa0745aaa9536eeea87648423351"} Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.095970 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.114879 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" podStartSLOduration=6.850332706 podStartE2EDuration="11.114860345s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.283851981 +0000 UTC m=+714.316936179" lastFinishedPulling="2026-02-21 06:59:03.5483796 +0000 UTC m=+718.581463818" observedRunningTime="2026-02-21 06:59:03.805059565 +0000 UTC m=+718.838143833" watchObservedRunningTime="2026-02-21 06:59:09.114860345 +0000 UTC m=+724.147944553" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.340262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.340360 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.346519 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.828935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.887172 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:19 crc kubenswrapper[4820]: I0221 06:59:19.656987 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.094857 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.096840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.099428 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.101055 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: 
I0221 06:59:31.338306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.365492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.417058 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.628582 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.937895 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="894b2292bfb37d901f11037b3cc99f84ebbea287d26c0748bf1681cd71062222" exitCode=0 Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.937948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"894b2292bfb37d901f11037b3cc99f84ebbea287d26c0748bf1681cd71062222"} Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.938228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerStarted","Data":"574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06"} Feb 21 06:59:33 crc kubenswrapper[4820]: I0221 06:59:33.951215 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="a2cfb57cc71230dd50799736f8b501f5fc48e2d0c48bbc9223d0d47bc57ab51c" exitCode=0 Feb 21 06:59:33 crc kubenswrapper[4820]: I0221 06:59:33.951573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"a2cfb57cc71230dd50799736f8b501f5fc48e2d0c48bbc9223d0d47bc57ab51c"} Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.940307 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cgbzf" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" containerID="cri-o://abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" gracePeriod=15 Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.965735 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="a8bb11fad7ebef1399ca38d060a61becc5101d9b7e120c9d16f0a9f0880826af" exitCode=0 Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.965778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"a8bb11fad7ebef1399ca38d060a61becc5101d9b7e120c9d16f0a9f0880826af"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.265979 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-cgbzf_18b46a58-b11c-4760-bd38-1c875c4ecf21/console/0.log" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.266285 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") 
pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388101 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config" (OuterVolumeSpecName: "console-config") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388365 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca" (OuterVolumeSpecName: "service-ca") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.392734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.392969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.393007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r" (OuterVolumeSpecName: "kube-api-access-xh87r") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "kube-api-access-xh87r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489372 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489398 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489406 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489414 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489422 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489433 4820 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971793 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cgbzf_18b46a58-b11c-4760-bd38-1c875c4ecf21/console/0.log" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971847 4820 generic.go:334] "Generic (PLEG): container finished" podID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" exitCode=2 Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971936 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerDied","Data":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.972010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerDied","Data":"0767d187d2981c7d5f1c668b318887301f7e5326b2d0aaa6f0c17cc8530104d7"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.972038 4820 scope.go:117] "RemoveContainer" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.994633 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.996614 4820 scope.go:117] "RemoveContainer" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 
crc kubenswrapper[4820]: E0221 06:59:35.997078 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": container with ID starting with abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25 not found: ID does not exist" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.997220 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"} err="failed to get container status \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": rpc error: code = NotFound desc = could not find container \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": container with ID starting with abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25 not found: ID does not exist" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.001750 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.219595 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399459 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.401277 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle" (OuterVolumeSpecName: "bundle") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.403345 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh" (OuterVolumeSpecName: "kube-api-access-9mnqh") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "kube-api-access-9mnqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.419310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util" (OuterVolumeSpecName: "util") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501535 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501584 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06"} Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987107 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987151 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:37 crc kubenswrapper[4820]: I0221 06:59:37.701884 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" path="/var/lib/kubelet/pods/18b46a58-b11c-4760-bd38-1c875c4ecf21/volumes" Feb 21 06:59:41 crc kubenswrapper[4820]: I0221 06:59:41.729397 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286258 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286817 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286832 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="pull" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286839 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="pull" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286849 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286858 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286879 4820 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="util" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286887 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="util" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287011 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287026 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287582 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.288968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.290354 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.290581 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.291207 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5b4d4" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.292719 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.299853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc 
kubenswrapper[4820]: I0221 06:59:45.422955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.423067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.423194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.502622 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.503353 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505155 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505350 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505782 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8bptw" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.516702 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod 
\"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.529958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.529973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.544102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.605093 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728661 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.745341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.748317 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.754659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " 
pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.816875 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.827394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc kubenswrapper[4820]: W0221 06:59:45.837703 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc44a7fe_6bdf_4d85_a2aa_aeafa3d1d74d.slice/crio-8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3 WatchSource:0}: Error finding container 8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3: Status 404 returned error can't find the container with id 8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3 Feb 21 06:59:46 crc kubenswrapper[4820]: I0221 06:59:46.025056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:46 crc kubenswrapper[4820]: W0221 06:59:46.030770 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17110d4_51ce_4fca_a5e7_ba4eedeb42a8.slice/crio-ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec WatchSource:0}: Error finding container ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec: Status 404 returned error can't find the container with id ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec Feb 21 06:59:46 crc kubenswrapper[4820]: I0221 06:59:46.031869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" 
event={"ID":"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d","Type":"ContainerStarted","Data":"8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3"} Feb 21 06:59:47 crc kubenswrapper[4820]: I0221 06:59:47.046599 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" event={"ID":"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8","Type":"ContainerStarted","Data":"ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec"} Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.059163 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" event={"ID":"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d","Type":"ContainerStarted","Data":"5d2b4cef7150794e3a2371d89dc9e308d2184074f06af2a1ae6bae14a39ad714"} Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.059673 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.084353 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" podStartSLOduration=1.3092193189999999 podStartE2EDuration="4.084333108s" podCreationTimestamp="2026-02-21 06:59:45 +0000 UTC" firstStartedPulling="2026-02-21 06:59:45.839557846 +0000 UTC m=+760.872642044" lastFinishedPulling="2026-02-21 06:59:48.614671635 +0000 UTC m=+763.647755833" observedRunningTime="2026-02-21 06:59:49.076056951 +0000 UTC m=+764.109141150" watchObservedRunningTime="2026-02-21 06:59:49.084333108 +0000 UTC m=+764.117417326" Feb 21 06:59:51 crc kubenswrapper[4820]: I0221 06:59:51.075870 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" 
event={"ID":"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8","Type":"ContainerStarted","Data":"6cfd838986ea6b549ace2ad3be7626e9a02cc705e7d23111040c6004fa7c8f36"} Feb 21 06:59:51 crc kubenswrapper[4820]: I0221 06:59:51.076262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.191458 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" podStartSLOduration=11.286517671 podStartE2EDuration="15.191440576s" podCreationTimestamp="2026-02-21 06:59:45 +0000 UTC" firstStartedPulling="2026-02-21 06:59:46.035750999 +0000 UTC m=+761.068835197" lastFinishedPulling="2026-02-21 06:59:49.940673904 +0000 UTC m=+764.973758102" observedRunningTime="2026-02-21 06:59:51.100294395 +0000 UTC m=+766.133378593" watchObservedRunningTime="2026-02-21 07:00:00.191440576 +0000 UTC m=+775.224524774" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.196954 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.197754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.201124 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.202215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.220218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405410 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.406438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.412184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.432993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.513543 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.906831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.136292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerStarted","Data":"5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758"} Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.136601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerStarted","Data":"7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9"} Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.152776 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" 
podStartSLOduration=1.152761065 podStartE2EDuration="1.152761065s" podCreationTimestamp="2026-02-21 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:01.150648758 +0000 UTC m=+776.183732956" watchObservedRunningTime="2026-02-21 07:00:01.152761065 +0000 UTC m=+776.185845263" Feb 21 07:00:02 crc kubenswrapper[4820]: I0221 07:00:02.143501 4820 generic.go:334] "Generic (PLEG): container finished" podID="54597218-e332-4423-adc0-b4be2977a4ce" containerID="5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758" exitCode=0 Feb 21 07:00:02 crc kubenswrapper[4820]: I0221 07:00:02.143549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerDied","Data":"5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758"} Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.376089 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545862 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.546192 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.549814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.549816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql" (OuterVolumeSpecName: "kube-api-access-tcbql") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "kube-api-access-tcbql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.647456 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.647500 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerDied","Data":"7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9"} Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153665 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9" Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153400 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:05 crc kubenswrapper[4820]: I0221 07:00:05.821080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 07:00:13 crc kubenswrapper[4820]: I0221 07:00:13.816436 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:00:13 crc kubenswrapper[4820]: I0221 07:00:13.816696 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:00:25 crc kubenswrapper[4820]: I0221 07:00:25.607840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.260846 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.261091 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261106 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261264 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261858 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.265163 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.265502 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8ljn7" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.280766 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dr9qm"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.284224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.286606 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.289950 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.290986 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.340919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341220 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.386511 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cwv62"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.387293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.390301 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.390485 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.391517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.391790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cx7cq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.411606 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.412508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.414522 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.431136 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442642 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443017 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod 
\"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.444382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.444558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.445257 4820 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.445306 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert podName:a5c8b64a-a6da-435e-a87d-bd397ad045a4 nodeName:}" failed. 
No retries permitted until 2026-02-21 07:00:26.945289987 +0000 UTC m=+801.978374185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert") pod "frr-k8s-webhook-server-78b44bf5bb-jw8nq" (UID: "a5c8b64a-a6da-435e-a87d-bd397ad045a4") : secret "frr-k8s-webhook-server-cert" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.452218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.482575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.484896 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545226 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9x2j\" (UniqueName: \"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545620 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.634606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.647068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.647167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648439 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc 
kubenswrapper[4820]: I0221 07:00:26.648523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.648643 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.648709 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist podName:cc577a47-69e2-4ae2-93c1-e922f0c6e3d8 nodeName:}" failed. No retries permitted until 2026-02-21 07:00:27.148694348 +0000 UTC m=+802.181778556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist") pod "speaker-cwv62" (UID: "cc577a47-69e2-4ae2-93c1-e922f0c6e3d8") : secret "metallb-memberlist" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648783 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648821 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9x2j\" (UniqueName: \"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.652260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.652393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.668273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9x2j\" (UniqueName: 
\"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.670856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.672863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.725016 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.898421 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: W0221 07:00:26.903696 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f342ec6_aed8_48ff_a1ba_9d6634bda927.slice/crio-403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1 WatchSource:0}: Error finding container 403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1: Status 404 returned error can't find the container with id 403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1 Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.952669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.957749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.154860 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.158602 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.180742 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281058 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"62242f816db7627dd228ae881131350ead84f4174402a57ead25b88741c64144"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"fd5a6053ed0b3d06cf7020e7625083fa1095dfed9938ffe55082a7de5922f93d"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.286044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"95824013e35b2f466e23d5c8960a4feaac2ed23ab70a402f931b714e0782add1"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.298675 4820 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.301325 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jrcl5" podStartSLOduration=1.3013063329999999 podStartE2EDuration="1.301306333s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:27.297442377 +0000 UTC m=+802.330526575" watchObservedRunningTime="2026-02-21 07:00:27.301306333 +0000 UTC m=+802.334390531" Feb 21 07:00:27 crc kubenswrapper[4820]: W0221 07:00:27.359018 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc577a47_69e2_4ae2_93c1_e922f0c6e3d8.slice/crio-81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181 WatchSource:0}: Error finding container 81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181: Status 404 returned error can't find the container with id 81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181 Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.410749 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.295621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"ea5a189303a35e30ead1f2f1e318066a9153121ea06eccd99be664bf632098d3"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.295999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" 
event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"e365d5006e67a435f8e3bff1160849e85cf1eb8d9a8a4cd40ceb72f6d040e2ac"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.296013 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.296319 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cwv62" Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.297556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" event={"ID":"a5c8b64a-a6da-435e-a87d-bd397ad045a4","Type":"ContainerStarted","Data":"ff4a9ddfbbd46a0e40d00a33931ffc28ec5efedc97e4745ca4d93c2743a157fa"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.319391 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cwv62" podStartSLOduration=2.319371467 podStartE2EDuration="2.319371467s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:28.313924618 +0000 UTC m=+803.347008826" watchObservedRunningTime="2026-02-21 07:00:28.319371467 +0000 UTC m=+803.352455675" Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.354775 4820 generic.go:334] "Generic (PLEG): container finished" podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="cbc113af5c2a6bdea91d6463d7f779cb6340f744b430c5989069c49cee009dbe" exitCode=0 Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.354909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" 
event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"cbc113af5c2a6bdea91d6463d7f779cb6340f744b430c5989069c49cee009dbe"} Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.358159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" event={"ID":"a5c8b64a-a6da-435e-a87d-bd397ad045a4","Type":"ContainerStarted","Data":"e447d16c88f1358b67cafc3ddcdeec7050e903c4de4b4ea3928cb89c1ceff4f8"} Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.358329 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.412687 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" podStartSLOduration=2.396757596 podStartE2EDuration="8.412650219s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="2026-02-21 07:00:27.409435644 +0000 UTC m=+802.442519842" lastFinishedPulling="2026-02-21 07:00:33.425328257 +0000 UTC m=+808.458412465" observedRunningTime="2026-02-21 07:00:34.400808665 +0000 UTC m=+809.433892883" watchObservedRunningTime="2026-02-21 07:00:34.412650219 +0000 UTC m=+809.445734457" Feb 21 07:00:35 crc kubenswrapper[4820]: I0221 07:00:35.368043 4820 generic.go:334] "Generic (PLEG): container finished" podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="ada97ed03b106b3662de92fc179820a1b2bfc50befca899ecd5e19d02ad05eba" exitCode=0 Feb 21 07:00:35 crc kubenswrapper[4820]: I0221 07:00:35.368104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"ada97ed03b106b3662de92fc179820a1b2bfc50befca899ecd5e19d02ad05eba"} Feb 21 07:00:36 crc kubenswrapper[4820]: I0221 07:00:36.377150 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="57e6e5fddc66aad5c01dd29af6184c28a5f49fadf6eac0beced3eb6d80e678e7" exitCode=0 Feb 21 07:00:36 crc kubenswrapper[4820]: I0221 07:00:36.377196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"57e6e5fddc66aad5c01dd29af6184c28a5f49fadf6eac0beced3eb6d80e678e7"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.302193 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cwv62" Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"3597e26d481415fd9774aaee5b50fbf16caf0b85a46d877281ea04cc0f723f6c"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"bd1b6f9de77e4cf318538404d88d743002bcf5227f3fed5cbf04591d64e8575a"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"78d0050cea8ddaecf61b0ec375254ab380c0ed9e64feef093223b8e7af31e624"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388694 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"1e0ccc996b64955392d8e84e98400d4e7cbdea6037b12f29e3df960c82e93ffc"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" 
event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"f5d43d720b554891a559e1ae4e21b5093a52d4911b630956d717d3897200be4e"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"13e5076f7983f2e11d2466556088039a81801da970e145d79d3e7cfda2f20cd1"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388807 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.413913 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dr9qm" podStartSLOduration=4.778632805 podStartE2EDuration="11.413890862s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="2026-02-21 07:00:26.776813547 +0000 UTC m=+801.809897745" lastFinishedPulling="2026-02-21 07:00:33.412071574 +0000 UTC m=+808.445155802" observedRunningTime="2026-02-21 07:00:37.410254352 +0000 UTC m=+812.443338570" watchObservedRunningTime="2026-02-21 07:00:37.413890862 +0000 UTC m=+812.446975060" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.663446 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.665326 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.668300 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.675012 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712898 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: 
I0221 07:00:38.813485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.813526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.813547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.814056 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.814101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.832627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.012775 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.209569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.403745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerStarted","Data":"9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47"} Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.403787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerStarted","Data":"0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515"} Feb 21 07:00:40 crc kubenswrapper[4820]: I0221 07:00:40.410852 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47" exitCode=0 Feb 21 07:00:40 crc kubenswrapper[4820]: I0221 07:00:40.410891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47"} Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.023966 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.026079 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.041365 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054165 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.155817 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.176151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.363761 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.635597 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.681751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.800104 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: W0221 07:00:41.805554 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23588b8_ba46_4a3b_8f44_22c46230f838.slice/crio-9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c WatchSource:0}: Error finding container 9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c: Status 404 returned error can't find the container with id 9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424373 4820 generic.go:334] "Generic (PLEG): container 
finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" exitCode=0 Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c"} Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c"} Feb 21 07:00:43 crc kubenswrapper[4820]: I0221 07:00:43.816329 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:00:43 crc kubenswrapper[4820]: I0221 07:00:43.816392 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.436829 4820 generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="d6e1b1acd64b121b18426115b034bc07a2c112c32661f175e5cb3efb706dfb9c" exitCode=0 Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.436908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" 
event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"d6e1b1acd64b121b18426115b034bc07a2c112c32661f175e5cb3efb706dfb9c"} Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.446743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} Feb 21 07:00:44 crc kubenswrapper[4820]: E0221 07:00:44.873764 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23588b8_ba46_4a3b_8f44_22c46230f838.slice/crio-conmon-59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac.scope\": RecentStats: unable to find data in memory cache]" Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.456355 4820 generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="27047234a26ed8c4476270f8c583712d5171722ca88352bd3ab0081bcd984f39" exitCode=0 Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.456464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"27047234a26ed8c4476270f8c583712d5171722ca88352bd3ab0081bcd984f39"} Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.458902 4820 generic.go:334] "Generic (PLEG): container finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" exitCode=0 Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.458955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" 
event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.482750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.502701 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pcnnk" podStartSLOduration=1.973792804 podStartE2EDuration="5.50268306s" podCreationTimestamp="2026-02-21 07:00:41 +0000 UTC" firstStartedPulling="2026-02-21 07:00:42.426011301 +0000 UTC m=+817.459095499" lastFinishedPulling="2026-02-21 07:00:45.954901557 +0000 UTC m=+820.987985755" observedRunningTime="2026-02-21 07:00:46.500640733 +0000 UTC m=+821.533724941" watchObservedRunningTime="2026-02-21 07:00:46.50268306 +0000 UTC m=+821.535767258" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.641454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.729395 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.753016 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860482 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860562 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.861750 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle" (OuterVolumeSpecName: "bundle") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.866665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx" (OuterVolumeSpecName: "kube-api-access-6hxqx") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "kube-api-access-6hxqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.870163 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util" (OuterVolumeSpecName: "util") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961772 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961804 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961814 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.187390 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489880 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515"} Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489925 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515" Feb 21 07:00:51 crc kubenswrapper[4820]: I0221 07:00:51.364627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:51 crc kubenswrapper[4820]: I0221 07:00:51.364889 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.270251 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="util" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271122 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="util" Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271207 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271302 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="pull" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271565 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="pull" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271898 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.272720 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276762 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276830 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-sw5zc" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276842 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.314078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.326399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" 
(UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.326470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.423679 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcnnk" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" probeResult="failure" output=< Feb 21 07:00:52 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 07:00:52 crc kubenswrapper[4820]: > Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427899 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.443953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.589143 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:53 crc kubenswrapper[4820]: I0221 07:00:53.058176 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:53 crc kubenswrapper[4820]: W0221 07:00:53.060172 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae6b64f_6c78_415f_b36e_e9cf9ec722dd.slice/crio-4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc WatchSource:0}: Error finding container 4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc: Status 404 returned error can't find the container with id 4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc Feb 21 07:00:53 crc kubenswrapper[4820]: I0221 07:00:53.525345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" 
event={"ID":"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd","Type":"ContainerStarted","Data":"4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc"} Feb 21 07:00:56 crc kubenswrapper[4820]: I0221 07:00:56.548451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" event={"ID":"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd","Type":"ContainerStarted","Data":"9f73b6d49d299652d52c831672834a19d4f38ca1dee8dfc350ec43f28812821f"} Feb 21 07:00:56 crc kubenswrapper[4820]: I0221 07:00:56.568212 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" podStartSLOduration=1.285433727 podStartE2EDuration="4.568197981s" podCreationTimestamp="2026-02-21 07:00:52 +0000 UTC" firstStartedPulling="2026-02-21 07:00:53.062664505 +0000 UTC m=+828.095748723" lastFinishedPulling="2026-02-21 07:00:56.345428779 +0000 UTC m=+831.378512977" observedRunningTime="2026-02-21 07:00:56.56745323 +0000 UTC m=+831.600537428" watchObservedRunningTime="2026-02-21 07:00:56.568197981 +0000 UTC m=+831.601282179" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.900264 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.902960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912026 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912890 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8rglc" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.923047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.935487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.935794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.036690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: 
\"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.037268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.062151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.067364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.224464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.374196 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.374879 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.381272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jskkw" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.388603 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.426580 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.441903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.442052 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.464400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.543532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod 
\"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.543616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.559891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.560873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.637289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.706161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.810074 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.890803 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: W0221 07:01:01.899668 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a53f347_c86d_4ef3_82c2_29549135afe6.slice/crio-1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb WatchSource:0}: Error finding container 1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb: Status 404 returned error can't find the container with id 1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.579158 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" event={"ID":"e88f2404-d287-429a-a995-ea8be7fa5be8","Type":"ContainerStarted","Data":"816b6cef803ab810406e3a0facf81dc80e95f6624d7321feba9ece3aa038238b"} Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.580437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" event={"ID":"3a53f347-c86d-4ef3-82c2-29549135afe6","Type":"ContainerStarted","Data":"1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb"} Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.580594 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pcnnk" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" containerID="cri-o://2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" gracePeriod=2 
Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.938196 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061512 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061611 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.062873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities" (OuterVolumeSpecName: "utilities") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.067130 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf" (OuterVolumeSpecName: "kube-api-access-2dzwf") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "kube-api-access-2dzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.163223 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.163273 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.190644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.264972 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591054 4820 generic.go:334] "Generic (PLEG): container finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" exitCode=0 Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591184 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c"} Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591209 4820 scope.go:117] "RemoveContainer" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591406 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.618303 4820 scope.go:117] "RemoveContainer" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.622737 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.626905 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.667779 4820 scope.go:117] "RemoveContainer" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682475 4820 scope.go:117] "RemoveContainer" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.682914 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": container with ID starting with 2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed not found: ID does not exist" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682950 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} err="failed to get container status \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": rpc error: code = NotFound desc = could not find container \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": container with ID starting with 2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed not found: ID does 
not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682974 4820 scope.go:117] "RemoveContainer" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.683408 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": container with ID starting with 59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac not found: ID does not exist" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683433 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} err="failed to get container status \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": rpc error: code = NotFound desc = could not find container \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": container with ID starting with 59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac not found: ID does not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683445 4820 scope.go:117] "RemoveContainer" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.683668 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": container with ID starting with b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c not found: ID does not exist" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683693 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c"} err="failed to get container status \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": rpc error: code = NotFound desc = could not find container \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": container with ID starting with b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c not found: ID does not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.707936 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" path="/var/lib/kubelet/pods/b23588b8-ba46-4a3b-8f44-22c46230f838/volumes" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.610497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" event={"ID":"3a53f347-c86d-4ef3-82c2-29549135afe6","Type":"ContainerStarted","Data":"5c89fe3bf3a643bccbbb04a348389c5348d44f109ff4137b8920630f8b4d6dab"} Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.612214 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" event={"ID":"e88f2404-d287-429a-a995-ea8be7fa5be8","Type":"ContainerStarted","Data":"7c96626874b9236e7e013698d8108ac3faf0bc4b66a40d0e23399c8a5692a8dd"} Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.612394 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.624552 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" podStartSLOduration=1.456616681 podStartE2EDuration="5.624535379s" podCreationTimestamp="2026-02-21 07:01:01 +0000 UTC" firstStartedPulling="2026-02-21 07:01:01.902871235 +0000 UTC m=+836.935955433" 
lastFinishedPulling="2026-02-21 07:01:06.070789913 +0000 UTC m=+841.103874131" observedRunningTime="2026-02-21 07:01:06.623512671 +0000 UTC m=+841.656596869" watchObservedRunningTime="2026-02-21 07:01:06.624535379 +0000 UTC m=+841.657619577" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.641616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" podStartSLOduration=2.243895405 podStartE2EDuration="6.641596016s" podCreationTimestamp="2026-02-21 07:01:00 +0000 UTC" firstStartedPulling="2026-02-21 07:01:01.646086452 +0000 UTC m=+836.679170650" lastFinishedPulling="2026-02-21 07:01:06.043787063 +0000 UTC m=+841.076871261" observedRunningTime="2026-02-21 07:01:06.638175123 +0000 UTC m=+841.671259311" watchObservedRunningTime="2026-02-21 07:01:06.641596016 +0000 UTC m=+841.674680214" Feb 21 07:01:11 crc kubenswrapper[4820]: I0221 07:01:11.227405 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.815979 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.816107 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.816200 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.817272 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.817367 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" gracePeriod=600 Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660023 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" exitCode=0 Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660726 4820 scope.go:117] "RemoveContainer" 
containerID="3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.838255 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839741 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-content" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-content" Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839787 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839799 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839818 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-utilities" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839830 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-utilities" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.840066 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.840960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.846339 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cpqld" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.848544 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.972894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.972974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.074893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.075091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: 
\"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.093577 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.094160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.171005 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.581538 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.703069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mf4f6" event={"ID":"d35515e4-d029-4f6a-be2a-d7ea32ab06ad","Type":"ContainerStarted","Data":"77ff3c1f0a5372fa4f674e5a86a0a60ed238ef00f14e5b2435672fbd5710a013"} Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.703124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mf4f6" event={"ID":"d35515e4-d029-4f6a-be2a-d7ea32ab06ad","Type":"ContainerStarted","Data":"dee5a75bbc13d8d2435b00414c5efd00a0442515f8b7a6d872031937e4ee7953"} Feb 21 07:01:21 crc kubenswrapper[4820]: I0221 07:01:21.724834 4820 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-mf4f6" podStartSLOduration=2.724819222 podStartE2EDuration="2.724819222s" podCreationTimestamp="2026-02-21 07:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:01:21.722363515 +0000 UTC m=+856.755447713" watchObservedRunningTime="2026-02-21 07:01:21.724819222 +0000 UTC m=+856.757903410" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.020362 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.021635 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8g7ps" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024125 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024775 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.030119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.109968 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc 
kubenswrapper[4820]: I0221 07:01:25.210798 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.227860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.343677 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.850439 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: W0221 07:01:25.857005 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f586a4e_5100_444f_8e11_0d6f785eac00.slice/crio-42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f WatchSource:0}: Error finding container 42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f: Status 404 returned error can't find the container with id 42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f Feb 21 07:01:26 crc kubenswrapper[4820]: I0221 07:01:26.736417 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" 
event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerStarted","Data":"42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f"} Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.403021 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.748478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerStarted","Data":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.769157 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zxv8h" podStartSLOduration=1.7785773649999999 podStartE2EDuration="3.769132118s" podCreationTimestamp="2026-02-21 07:01:25 +0000 UTC" firstStartedPulling="2026-02-21 07:01:25.859606602 +0000 UTC m=+860.892690800" lastFinishedPulling="2026-02-21 07:01:27.850161315 +0000 UTC m=+862.883245553" observedRunningTime="2026-02-21 07:01:28.762000493 +0000 UTC m=+863.795084691" watchObservedRunningTime="2026-02-21 07:01:28.769132118 +0000 UTC m=+863.802216316" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.014445 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.015672 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.043099 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.174419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.275484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.302924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.389577 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.757684 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zxv8h" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" containerID="cri-o://f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" gracePeriod=2 Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.776769 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.052499 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.188347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"6f586a4e-5100-444f-8e11-0d6f785eac00\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.198068 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7" (OuterVolumeSpecName: "kube-api-access-sv2l7") pod "6f586a4e-5100-444f-8e11-0d6f785eac00" (UID: "6f586a4e-5100-444f-8e11-0d6f785eac00"). InnerVolumeSpecName "kube-api-access-sv2l7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.289706 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.765994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tt62z" event={"ID":"2af934a2-6680-4932-b3af-5f8bdee6c740","Type":"ContainerStarted","Data":"f437daf445daf36ba8d71886d37b62c64fda8f52e87ded71a0b8ee7c5686ef45"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.766343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tt62z" event={"ID":"2af934a2-6680-4932-b3af-5f8bdee6c740","Type":"ContainerStarted","Data":"893ba68971cfe0be4a5528783af19d8cc6a75cc64abc3578fb0b4fc385d62e00"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768327 4820 generic.go:334] "Generic (PLEG): container finished" podID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" exitCode=0 Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerDied","Data":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerDied","Data":"42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768404 4820 scope.go:117] "RemoveContainer" 
containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768547 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.785926 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tt62z" podStartSLOduration=2.4085479850000002 podStartE2EDuration="2.785899369s" podCreationTimestamp="2026-02-21 07:01:28 +0000 UTC" firstStartedPulling="2026-02-21 07:01:29.779517733 +0000 UTC m=+864.812601931" lastFinishedPulling="2026-02-21 07:01:30.156869107 +0000 UTC m=+865.189953315" observedRunningTime="2026-02-21 07:01:30.783715619 +0000 UTC m=+865.816799827" watchObservedRunningTime="2026-02-21 07:01:30.785899369 +0000 UTC m=+865.818983577" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.790797 4820 scope.go:117] "RemoveContainer" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: E0221 07:01:30.791569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": container with ID starting with f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329 not found: ID does not exist" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.791618 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} err="failed to get container status \"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": rpc error: code = NotFound desc = could not find container 
\"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": container with ID starting with f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329 not found: ID does not exist" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.809738 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.813591 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:31 crc kubenswrapper[4820]: I0221 07:01:31.705530 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" path="/var/lib/kubelet/pods/6f586a4e-5100-444f-8e11-0d6f785eac00/volumes" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.214639 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:34 crc kubenswrapper[4820]: E0221 07:01:34.215130 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.215144 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.215283 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.216012 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.227996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.344899 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345007 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.372635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.541277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.033137 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:35 crc kubenswrapper[4820]: W0221 07:01:35.037431 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf0f82c_d0f3_4315_88f5_73b4a99cf1d0.slice/crio-2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b WatchSource:0}: Error finding container 2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b: Status 404 returned error can't find the container with id 2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.801763 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" exitCode=0 Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.801890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07"} Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.802133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerStarted","Data":"2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b"} Feb 21 07:01:36 crc kubenswrapper[4820]: I0221 07:01:36.809642 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" exitCode=0 Feb 21 07:01:36 crc kubenswrapper[4820]: I0221 
07:01:36.809696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc"} Feb 21 07:01:37 crc kubenswrapper[4820]: I0221 07:01:37.819645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerStarted","Data":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} Feb 21 07:01:37 crc kubenswrapper[4820]: I0221 07:01:37.841470 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nspq6" podStartSLOduration=2.4362492270000002 podStartE2EDuration="3.841437733s" podCreationTimestamp="2026-02-21 07:01:34 +0000 UTC" firstStartedPulling="2026-02-21 07:01:35.802981589 +0000 UTC m=+870.836065787" lastFinishedPulling="2026-02-21 07:01:37.208170095 +0000 UTC m=+872.241254293" observedRunningTime="2026-02-21 07:01:37.835202451 +0000 UTC m=+872.868286649" watchObservedRunningTime="2026-02-21 07:01:37.841437733 +0000 UTC m=+872.874521971" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.390593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.392098 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.416721 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.862942 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.541751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.542294 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.580214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.907941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:45 crc kubenswrapper[4820]: I0221 07:01:45.803786 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:46 crc kubenswrapper[4820]: I0221 07:01:46.875822 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nspq6" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" containerID="cri-o://66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" gracePeriod=2 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.054775 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.056220 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.060844 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sqz9z" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.064021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 
07:01:47.263568 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.319103 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc 
kubenswrapper[4820]: I0221 07:01:47.320025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.334333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.377898 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420201 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.421509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities" (OuterVolumeSpecName: "utilities") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.429887 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd" (OuterVolumeSpecName: "kube-api-access-dq2jd") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). 
InnerVolumeSpecName "kube-api-access-dq2jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.463622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521878 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521913 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.578087 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: W0221 07:01:47.582601 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47790790_d956_41e0_8868_9fb9fecfefe7.slice/crio-2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd WatchSource:0}: Error finding container 2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd: Status 404 returned 
error can't find the container with id 2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.882839 4820 generic.go:334] "Generic (PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="01038c847cf937170245344a33fbdc7f19bf2b88135e96ab6708ce05e2281dbb" exitCode=0 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.882938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"01038c847cf937170245344a33fbdc7f19bf2b88135e96ab6708ce05e2281dbb"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.883116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerStarted","Data":"2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885427 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" exitCode=0 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885519 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885554 4820 scope.go:117] "RemoveContainer" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.899510 4820 scope.go:117] "RemoveContainer" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.917116 4820 scope.go:117] "RemoveContainer" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.923189 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931347 4820 scope.go:117] "RemoveContainer" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.931721 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": container with ID starting with 66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a not found: ID does not exist" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931777 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} err="failed to 
get container status \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": rpc error: code = NotFound desc = could not find container \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": container with ID starting with 66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931828 4820 scope.go:117] "RemoveContainer" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.932217 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": container with ID starting with 537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc not found: ID does not exist" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932260 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc"} err="failed to get container status \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": rpc error: code = NotFound desc = could not find container \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": container with ID starting with 537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932288 4820 scope.go:117] "RemoveContainer" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.932834 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": container with ID starting with bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07 not found: ID does not exist" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932879 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07"} err="failed to get container status \"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": rpc error: code = NotFound desc = could not find container \"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": container with ID starting with bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07 not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.936799 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:48 crc kubenswrapper[4820]: I0221 07:01:48.892502 4820 generic.go:334] "Generic (PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="f2b97b10fc8f4e954b58ebb3cf8ae33b16d546186f43708f48e7e2637503ba8d" exitCode=0 Feb 21 07:01:48 crc kubenswrapper[4820]: I0221 07:01:48.892595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"f2b97b10fc8f4e954b58ebb3cf8ae33b16d546186f43708f48e7e2637503ba8d"} Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.704982 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" path="/var/lib/kubelet/pods/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0/volumes" Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.905618 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="bc6abeea8560611797e5ebf5c141aebde43471dd6a3e04302b8f6a5d7eab4b7e" exitCode=0 Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.905986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"bc6abeea8560611797e5ebf5c141aebde43471dd6a3e04302b8f6a5d7eab4b7e"} Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.185444 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269194 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269305 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.270086 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle" (OuterVolumeSpecName: "bundle") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.278079 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn" (OuterVolumeSpecName: "kube-api-access-jjkhn") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "kube-api-access-jjkhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.290081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util" (OuterVolumeSpecName: "util") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370878 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370949 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370968 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd"} Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925896 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925617 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.206963 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207188 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207198 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207211 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="util" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207217 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="util" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207229 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207255 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207263 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-content" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207268 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-content" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="pull" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="pull" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207292 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-utilities" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207298 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-utilities" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207408 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207427 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.208693 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.218201 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.426814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427134 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.444848 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.523461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.961996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbg7"]
Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.958806 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f" exitCode=0
Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.958973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"}
Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.960189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerStarted","Data":"952313b28cd6f53641c8f8ee679f85d42b6fa78ed8b197b1cdd78e7ea0cde5a8"}
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.459493 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"]
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.460335 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.462305 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bc4xv"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.497578 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"]
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.554278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.655674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.674754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.778265 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.971108 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f" exitCode=0
Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.973285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"}
Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.186600 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"]
Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.980520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerStarted","Data":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"}
Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.991642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" event={"ID":"b7cb4a9f-82fd-41b1-8175-351de45fde99","Type":"ContainerStarted","Data":"d6e5a2caaaec05c5508bf1e4085d61c0aa16ce4cbb0d788cbef3000cfbb5913a"}
Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.999091 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llbg7" podStartSLOduration=2.575517157 podStartE2EDuration="3.999076157s" podCreationTimestamp="2026-02-21 07:01:55 +0000 UTC" firstStartedPulling="2026-02-21 07:01:56.960730406 +0000 UTC m=+891.993814604" lastFinishedPulling="2026-02-21 07:01:58.384289406 +0000 UTC m=+893.417373604" observedRunningTime="2026-02-21 07:01:58.996016402 +0000 UTC m=+894.029100590" watchObservedRunningTime="2026-02-21 07:01:58.999076157 +0000 UTC m=+894.032160355"
Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.018470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" event={"ID":"b7cb4a9f-82fd-41b1-8175-351de45fde99","Type":"ContainerStarted","Data":"90e413e1e124987e3863df0ab0b6743273f75184bfffb82952c4e6841aee29ba"}
Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.019061 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.060281 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" podStartSLOduration=2.126359579 podStartE2EDuration="6.060245802s" podCreationTimestamp="2026-02-21 07:01:57 +0000 UTC" firstStartedPulling="2026-02-21 07:01:58.195177732 +0000 UTC m=+893.228261940" lastFinishedPulling="2026-02-21 07:02:02.129063925 +0000 UTC m=+897.162148163" observedRunningTime="2026-02-21 07:02:03.057826156 +0000 UTC m=+898.090910364" watchObservedRunningTime="2026-02-21 07:02:03.060245802 +0000 UTC m=+898.093330000"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.308303 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.309666 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.336304 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.478879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.496903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.524319 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.524375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.562221 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.634350 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.985980 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:06 crc kubenswrapper[4820]: I0221 07:02:06.067049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerStarted","Data":"69079c7c5acb5ab328f11ec4cd84c2f068ef88fc99cfbcdd5b34b69d1ea69418"}
Feb 21 07:02:06 crc kubenswrapper[4820]: I0221 07:02:06.116596 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.077279 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6" exitCode=0
Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.077398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"}
Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.784096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"
Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.889329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llbg7"]
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.086588 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24" exitCode=0
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.086791 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llbg7" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" containerID="cri-o://f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" gracePeriod=2
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.087606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"}
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.493720 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") "
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") "
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") "
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.646648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities" (OuterVolumeSpecName: "utilities") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.651473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l" (OuterVolumeSpecName: "kube-api-access-vgr9l") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "kube-api-access-vgr9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.747353 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.747399 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.895064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.950300 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095484 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" exitCode=0
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095543 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llbg7"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"}
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"952313b28cd6f53641c8f8ee679f85d42b6fa78ed8b197b1cdd78e7ea0cde5a8"}
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095974 4820 scope.go:117] "RemoveContainer" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.099287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerStarted","Data":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"}
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.109664 4820 scope.go:117] "RemoveContainer" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.120961 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zcpg" podStartSLOduration=2.69308084 podStartE2EDuration="4.120946037s" podCreationTimestamp="2026-02-21 07:02:05 +0000 UTC" firstStartedPulling="2026-02-21 07:02:07.07943379 +0000 UTC m=+902.112517998" lastFinishedPulling="2026-02-21 07:02:08.507298997 +0000 UTC m=+903.540383195" observedRunningTime="2026-02-21 07:02:09.118808838 +0000 UTC m=+904.151893046" watchObservedRunningTime="2026-02-21 07:02:09.120946037 +0000 UTC m=+904.154030235"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.135658 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llbg7"]
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.139475 4820 scope.go:117] "RemoveContainer" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.147420 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llbg7"]
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.153558 4820 scope.go:117] "RemoveContainer" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"
Feb 21 07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.153965 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": container with ID starting with f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6 not found: ID does not exist" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154024 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"} err="failed to get container status \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": rpc error: code = NotFound desc = could not find container \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": container with ID starting with f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6 not found: ID does not exist"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154061 4820 scope.go:117] "RemoveContainer" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"
Feb 21 07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.154751 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": container with ID starting with 07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f not found: ID does not exist" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154782 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"} err="failed to get container status \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": rpc error: code = NotFound desc = could not find container \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": container with ID starting with 07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f not found: ID does not exist"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154805 4820 scope.go:117] "RemoveContainer" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"
Feb 21 07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.155083 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": container with ID starting with 440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f not found: ID does not exist" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.155114 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"} err="failed to get container status \"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": rpc error: code = NotFound desc = could not find container \"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": container with ID starting with 440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f not found: ID does not exist"
Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.705356 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" path="/var/lib/kubelet/pods/fe66c064-e1f1-4efe-b7a8-4aeae6504817/volumes"
Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.635211 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.636582 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.673822 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:16 crc kubenswrapper[4820]: I0221 07:02:16.185980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:16 crc kubenswrapper[4820]: I0221 07:02:16.221416 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.158136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4zcpg" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" containerID="cri-o://f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" gracePeriod=2
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.541964 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.677916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") "
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") "
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") "
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678716 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities" (OuterVolumeSpecName: "utilities") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.686977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj" (OuterVolumeSpecName: "kube-api-access-7tgdj") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "kube-api-access-7tgdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.731623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780031 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780157 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780401 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") on node \"crc\" DevicePath \"\""
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167535 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" exitCode=0
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"}
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167600 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"69079c7c5acb5ab328f11ec4cd84c2f068ef88fc99cfbcdd5b34b69d1ea69418"}
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167643 4820 scope.go:117] "RemoveContainer" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.194312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.197991 4820 scope.go:117] "RemoveContainer" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.198931 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"]
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.214647 4820 scope.go:117] "RemoveContainer" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.231738 4820 scope.go:117] "RemoveContainer" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"
Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 07:02:19.232137 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": container with ID starting with f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a not found: ID does not exist" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232200 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"} err="failed to get container status \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": rpc error: code = NotFound desc = could not find container \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": container with ID starting with f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a not found: ID does not exist"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232262 4820 scope.go:117] "RemoveContainer" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"
Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 07:02:19.232565 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": container with ID starting with 87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24 not found: ID does not exist" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232596 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"} err="failed to get container status \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": rpc error: code = NotFound desc = could not find container \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": container with ID starting with 87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24 not found: ID does not exist"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232618 4820 scope.go:117] "RemoveContainer" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"
Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 07:02:19.232879 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": container with ID starting with ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6 not found: ID does not exist" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232928 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"} err="failed to get container status \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": rpc error: code = NotFound desc = could not find container \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": container with ID starting with ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6 not found: ID does not exist"
Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.703270 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e07490b-0050-483c-8e03-bb915735b22a" path="/var/lib/kubelet/pods/1e07490b-0050-483c-8e03-bb915735b22a/volumes"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.628975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"]
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629762 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629776 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server"
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-content"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-content"
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629814 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-utilities"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629823 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-utilities"
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629841 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-content"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-content"
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629866 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629874 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server"
Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629886 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-utilities"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629894 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-utilities"
Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630045 4820 memory_manager.go:354] "RemoveStaleState removing state"
podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630060 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630514 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.635832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7wlqx" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.637061 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.638172 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.645209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.650582 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.655208 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.656304 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.657984 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n425l" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.658396 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jc8vd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.692569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.719011 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.719776 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.723099 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tk9hs" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.744725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 
07:02:47.755112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.770652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t5npw" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.796361 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.802040 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.808019 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.813604 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.814331 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.814818 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818153 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xx9d" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818376 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818531 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-smzkg" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.833957 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.834987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.840391 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.841253 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.843529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h92lv" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.851665 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853844 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853861 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.862320 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.863083 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.869796 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.870886 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.871510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bdntt" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.875642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z7nvp" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.890973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.899666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.911338 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.912625 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.922972 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.923785 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.927111 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-js42f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.935841 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.947855 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.954960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955119 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955176 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955929 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.964291 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.965072 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.968911 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.977467 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nz7jt" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.986814 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.998060 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.012895 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.014001 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.017815 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lg84l" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.033677 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.035317 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.036257 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.037822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.044665 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8k5rm" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057586 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057635 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.058093 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.058136 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:48.558122314 +0000 UTC m=+943.591206512 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.058314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.070778 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.101901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.105375 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.105886 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.106193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.113047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.114867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.119349 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.120093 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.129975 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-szg66" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.136823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.158531 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159197 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159888 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.175533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.175717 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cbvg5" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.207286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.208130 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.212909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.252162 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261532 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261600 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 
21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.272449 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.273313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.279642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-phlw6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.282728 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.299805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.304364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.325827 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.358994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.392991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.393500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.394778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.394903 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.393903 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.396093 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:48.895883335 +0000 UTC m=+943.928967533 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.409521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.418373 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.431650 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.437740 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2rktp" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.437980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.438887 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.439407 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.444256 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.455301 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.459579 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.460471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.464652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jmpnv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.477868 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.485359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.489971 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.497813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.525554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.539344 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.540339 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.551658 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xk2hv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.562254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.568027 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.569117 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.569948 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.570690 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dgrsg" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.590717 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmzd\" (UniqueName: \"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.598998 4820 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.599082 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.599066185 +0000 UTC m=+944.632150383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.601612 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.606216 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.608111 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.614376 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cvbk4" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.614565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.615294 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.615432 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.642475 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.643356 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.647527 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.661410 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vjgjh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700393 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmzd\" (UniqueName: \"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700514 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.718979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmzd\" (UniqueName: 
\"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.750121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.753464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.766695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.785165 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802342 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 
07:02:48.802592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804227 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804293 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.304276909 +0000 UTC m=+944.337361107 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804987 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.805024 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.30501382 +0000 UTC m=+944.338098018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.830076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.830715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.841326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.904305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.904526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.904700 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.904750 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. 
No retries permitted until 2026-02-21 07:02:49.904729998 +0000 UTC m=+944.937814196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.915974 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.925226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.929860 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.054503 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.227517 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.316000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.316096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316214 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316279 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:50.316261508 +0000 UTC m=+945.349345706 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316289 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316358 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:50.3163408 +0000 UTC m=+945.349424998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.333331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.358451 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f64d1a_4768_48e1_8a88_fbf906956528.slice/crio-2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705 WatchSource:0}: Error finding container 2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705: Status 404 returned error can't find the container with id 2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705 Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.360793 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.365200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" event={"ID":"76209e29-400d-4677-85b5-89c5f4e9323a","Type":"ContainerStarted","Data":"4d66cbc3a7a6b364dc1dc6a92cf7ae7e2f06554f15e521a5d7d46d3b0f20e0de"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.367955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" event={"ID":"f8cd79d8-6ba2-467c-95b5-4d965d73ed75","Type":"ContainerStarted","Data":"f7b933ff31f42e50bfce992517a26b26b8af281d2e8643ce1d50bb087202ddf8"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.369446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" event={"ID":"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8","Type":"ContainerStarted","Data":"f19deb5a5b775ac444d24c0d4822d017593bd49e6a481e4bbabbf9fd6cb7a3e0"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.369888 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.531587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.569322 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod903ed1dc_819c_4ed9_86f6_ca32e4f96792.slice/crio-8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0 WatchSource:0}: Error finding container 8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0: Status 404 returned error can't find the container with id 
8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0 Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.570287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.594149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.624369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.624484 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.624519 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:51.624506702 +0000 UTC m=+946.657590900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.763884 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.800522 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4b6741_5442_4ef0_a8e1_49e389157cd4.slice/crio-2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8 WatchSource:0}: Error finding container 2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8: Status 404 returned error can't find the container with id 2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.801400 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec17569_aac1_4b58_8efc_b5a483e47a71.slice/crio-6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936 WatchSource:0}: Error finding container 6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936: Status 404 returned error can't find the container with id 6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.802691 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb425a24f_112c_4e36_a173_21a59ce15ef0.slice/crio-71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e WatchSource:0}: Error finding container 71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e: Status 404 returned 
error can't find the container with id 71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.810541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.812125 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd922fcc6_f8a7_451a_b998_fc04189a6d85.slice/crio-914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700 WatchSource:0}: Error finding container 914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700: Status 404 returned error can't find the container with id 914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.820136 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047df55d_9730_4215_bbd5_73fd59a0e9f5.slice/crio-238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7 WatchSource:0}: Error finding container 238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7: Status 404 returned error can't find the container with id 238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7 Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.823356 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hds88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-pbn9f_openstack-operators(047df55d-9730-4215-bbd5-73fd59a0e9f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.824543 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.824895 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.838849 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.844840 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.847545 4820 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vs87x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-wv5gr_openstack-operators(fde95ed3-63bc-4401-b8b8-539da71db026): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.849294 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.852384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.860777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.931829 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.931980 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.932028 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:51.932014956 +0000 UTC m=+946.965099154 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.950370 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.960641 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.969535 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.997069 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.010142 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqdbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-2dfxn_openstack-operators(9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.011916 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:50 
crc kubenswrapper[4820]: I0221 07:02:50.014602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:50 crc kubenswrapper[4820]: W0221 07:02:50.028550 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod246cc20b_aa24_4c15_8eb7_659e10b21e92.slice/crio-55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535 WatchSource:0}: Error finding container 55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535: Status 404 returned error can't find the container with id 55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535 Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.037831 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6mrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-jt2g2_openstack-operators(ee323e4c-82c4-4b71-b69b-5aef22e36516): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.038283 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlmzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-jdxhc_openstack-operators(246cc20b-aa24-4c15-8eb7-659e10b21e92): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.038343 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfzqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-n6dpn_openstack-operators(18cf798f-3eea-4e15-8bb1-bda4895ffed4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.040357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:50 crc 
kubenswrapper[4820]: E0221 07:02:50.040399 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.046728 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.337743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.337882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338044 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338094 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. 
No retries permitted until 2026-02-21 07:02:52.338079496 +0000 UTC m=+947.371163694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338350 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338430 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:52.338421655 +0000 UTC m=+947.371505853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.401724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" event={"ID":"a4f64d1a-4768-48e1-8a88-fbf906956528","Type":"ContainerStarted","Data":"2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.403400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" event={"ID":"412bd84a-46bb-49b9-8d0a-17d6cc683ea0","Type":"ContainerStarted","Data":"dd299d62dc5c35118e7e6265e9cd896a73d81feac775240edd74923068b3298f"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 
07:02:50.405696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" event={"ID":"2b4b6741-5442-4ef0-a8e1-49e389157cd4","Type":"ContainerStarted","Data":"2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.431051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" event={"ID":"903ed1dc-819c-4ed9-86f6-ca32e4f96792","Type":"ContainerStarted","Data":"8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.432864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" event={"ID":"4f343be8-a654-43ac-938a-6b726caab1ad","Type":"ContainerStarted","Data":"d9a5908311079faa3baaaeb94ffdddf091c57f657f0fd7a55b40753d269b5223"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.434444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" event={"ID":"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1","Type":"ContainerStarted","Data":"7eff22a697ed68495406267845fb52e365ce4dd2f95235d58e179c2936821aec"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.454143 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.458622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" event={"ID":"7ab15a3b-5688-4d42-b99a-e88bb8b11f65","Type":"ContainerStarted","Data":"61f5b235390f23cc566913940568830592187ea0068dd0f8cf4fc5c0a317b3c2"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.464531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" event={"ID":"ee323e4c-82c4-4b71-b69b-5aef22e36516","Type":"ContainerStarted","Data":"3e7fd551c96c2715af94302735fcbe6de47cad5e9f3b780eb3bef8d7facbb894"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.466539 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.467059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" event={"ID":"3c9c6322-ba57-47b3-a079-ab86a6660c45","Type":"ContainerStarted","Data":"3ea44f73cec2b4e1192dc3d93a76184096c293bb16b09a199bf3b56d99a755af"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.468052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" event={"ID":"047df55d-9730-4215-bbd5-73fd59a0e9f5","Type":"ContainerStarted","Data":"238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.473487 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.475126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" event={"ID":"d922fcc6-f8a7-451a-b998-fc04189a6d85","Type":"ContainerStarted","Data":"914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.477141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" event={"ID":"b425a24f-112c-4e36-a173-21a59ce15ef0","Type":"ContainerStarted","Data":"71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.490547 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" event={"ID":"246cc20b-aa24-4c15-8eb7-659e10b21e92","Type":"ContainerStarted","Data":"55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.492275 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.493315 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" 
event={"ID":"fde95ed3-63bc-4401-b8b8-539da71db026","Type":"ContainerStarted","Data":"c9626e57cda54e43b32583b2356d1f5fd7d32112ea4e926be95e177a15695cc3"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.496396 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.497659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" event={"ID":"9ec17569-aac1-4b58-8efc-b5a483e47a71","Type":"ContainerStarted","Data":"6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.504744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" event={"ID":"b248c78b-0213-4833-8d04-7d2514c2e673","Type":"ContainerStarted","Data":"929fb73c0f97e8b5d983502beb60707e955b7c1090b1c172e8d0983704755121"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.518370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" event={"ID":"18cf798f-3eea-4e15-8bb1-bda4895ffed4","Type":"ContainerStarted","Data":"db96f1c5bb4c0a627bc2a549d99a0ada9bd2b0d99f8231e397c5e41e941157f9"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.519756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.539948 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540305 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540346 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540543 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540750 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:51 crc kubenswrapper[4820]: I0221 07:02:51.675356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.675603 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.675671 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:55.675642922 +0000 UTC m=+950.708727120 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: I0221 07:02:51.982092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.982514 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.982555 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:55.982542879 +0000 UTC m=+951.015627077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: I0221 07:02:52.389640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:52 crc kubenswrapper[4820]: I0221 07:02:52.389759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389812 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389886 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:56.389868243 +0000 UTC m=+951.422952441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389930 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.390003 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:56.389987307 +0000 UTC m=+951.423071505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:55 crc kubenswrapper[4820]: I0221 07:02:55.750324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:55 crc kubenswrapper[4820]: E0221 07:02:55.750525 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:55 crc kubenswrapper[4820]: E0221 07:02:55.750812 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert 
podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:03.75079029 +0000 UTC m=+958.783874508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.055045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.055273 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.055321 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.055305772 +0000 UTC m=+959.088389970 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.458532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.458640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.458820 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.458887 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.458867373 +0000 UTC m=+959.491951571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.459625 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.459679 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.459670566 +0000 UTC m=+959.492754764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.633839 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.634256 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdpbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-gxpq6_openstack-operators(b248c78b-0213-4833-8d04-7d2514c2e673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.636283 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podUID="b248c78b-0213-4833-8d04-7d2514c2e673" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.110088 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.110288 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8zbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-whrpt_openstack-operators(b425a24f-112c-4e36-a173-21a59ce15ef0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.111480 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podUID="b425a24f-112c-4e36-a173-21a59ce15ef0" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.575720 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.575985 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz9dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-54dzd_openstack-operators(d922fcc6-f8a7-451a-b998-fc04189a6d85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.577363 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podUID="d922fcc6-f8a7-451a-b998-fc04189a6d85" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.612756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podUID="b248c78b-0213-4833-8d04-7d2514c2e673" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.612800 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podUID="b425a24f-112c-4e36-a173-21a59ce15ef0" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.613512 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podUID="d922fcc6-f8a7-451a-b998-fc04189a6d85" Feb 21 07:03:03 crc kubenswrapper[4820]: I0221 07:03:03.762955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.763207 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.763292 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:19.763273796 +0000 UTC m=+974.796357994 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.066268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.066423 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.066483 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:03:20.066464882 +0000 UTC m=+975.099549080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.198712 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.199012 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vctj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-lgdx6_openstack-operators(903ed1dc-819c-4ed9-86f6-ca32e4f96792): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.200945 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podUID="903ed1dc-819c-4ed9-86f6-ca32e4f96792" Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.472670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod 
\"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.472765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.472904 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.472957 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:20.472943563 +0000 UTC m=+975.506027761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.491640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.618207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podUID="903ed1dc-819c-4ed9-86f6-ca32e4f96792" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.887614 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.887816 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swgk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-c96wv_openstack-operators(2b4b6741-5442-4ef0-a8e1-49e389157cd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.889804 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podUID="2b4b6741-5442-4ef0-a8e1-49e389157cd4" Feb 21 07:03:05 crc kubenswrapper[4820]: E0221 07:03:05.623733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podUID="2b4b6741-5442-4ef0-a8e1-49e389157cd4" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.646935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" event={"ID":"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8","Type":"ContainerStarted","Data":"3efb1612d0921340df29f5c3de8d0b4f622aa74662d0906671c5c6ccc58a542b"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.648278 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.655044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" event={"ID":"76209e29-400d-4677-85b5-89c5f4e9323a","Type":"ContainerStarted","Data":"6d9b9afc82feef2c8d9a1dddcc7082fc801798bad04ecb345400daf07ec14804"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.655654 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.657425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" event={"ID":"4f343be8-a654-43ac-938a-6b726caab1ad","Type":"ContainerStarted","Data":"e04b135d38644b180cc79043c61f6d78d85c06447ffb8552469cfdae68b5eabe"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.657746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.659179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" event={"ID":"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1","Type":"ContainerStarted","Data":"2fbf00e2d12cda0513ad1a2600bfb6a7b8f5bcb89fa784dbb8909ded05a8bbd9"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.659563 4820 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.660823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" event={"ID":"ee323e4c-82c4-4b71-b69b-5aef22e36516","Type":"ContainerStarted","Data":"5818006b1ac29dcaf3924d7b396a1dd51f4e95219b56af950dbb238eaf7a3e6a"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.661163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.662700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" event={"ID":"412bd84a-46bb-49b9-8d0a-17d6cc683ea0","Type":"ContainerStarted","Data":"75f82a51ac53be46bdb753d07d47a1b23c7dc37ea5b88ea20f7463ce69e15bff"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.663034 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.677031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" event={"ID":"f8cd79d8-6ba2-467c-95b5-4d965d73ed75","Type":"ContainerStarted","Data":"7c7cdaa465dad51ea90a3c70f72d57d4edc51c585f666bfea58f1b53a11dd3c4"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.677201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.681425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" 
event={"ID":"246cc20b-aa24-4c15-8eb7-659e10b21e92","Type":"ContainerStarted","Data":"aa0ff828d3aec8afe89ccbc3c96b86080552e5b1886ba11e6da68e6c03ea4bd3"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.681612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.683654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" event={"ID":"a4f64d1a-4768-48e1-8a88-fbf906956528","Type":"ContainerStarted","Data":"cdd4555cd1536c798acec8a30a3778bfbc0197323d6eee9b50d35f749f5d38b6"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.683707 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.687168 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" podStartSLOduration=4.319142608 podStartE2EDuration="19.687149745s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.992677905 +0000 UTC m=+945.025762103" lastFinishedPulling="2026-02-21 07:03:05.360685042 +0000 UTC m=+960.393769240" observedRunningTime="2026-02-21 07:03:07.686066895 +0000 UTC m=+962.719151093" watchObservedRunningTime="2026-02-21 07:03:07.687149745 +0000 UTC m=+962.720233943" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.688483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" event={"ID":"7ab15a3b-5688-4d42-b99a-e88bb8b11f65","Type":"ContainerStarted","Data":"7fbe38ad2de90b11fef6c18fe9745fa8ec471ac0e911e7f92b48ebf626fa0bf7"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.688885 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.712455 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" podStartSLOduration=4.616994117 podStartE2EDuration="20.712429217s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:48.755419883 +0000 UTC m=+943.788504081" lastFinishedPulling="2026-02-21 07:03:04.850854983 +0000 UTC m=+959.883939181" observedRunningTime="2026-02-21 07:03:07.671859387 +0000 UTC m=+962.704943585" watchObservedRunningTime="2026-02-21 07:03:07.712429217 +0000 UTC m=+962.745513415" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.716317 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podStartSLOduration=4.25492438 podStartE2EDuration="20.716301082s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.009998449 +0000 UTC m=+945.043082637" lastFinishedPulling="2026-02-21 07:03:06.471375141 +0000 UTC m=+961.504459339" observedRunningTime="2026-02-21 07:03:07.714559595 +0000 UTC m=+962.747643793" watchObservedRunningTime="2026-02-21 07:03:07.716301082 +0000 UTC m=+962.749385280" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.729571 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" event={"ID":"9ec17569-aac1-4b58-8efc-b5a483e47a71","Type":"ContainerStarted","Data":"8218a4a7be71393ab0b35aeab3a95b224ec7185135c8c8fd2207d6939d5e0f4d"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.730438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.751900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" podStartSLOduration=3.957540904 podStartE2EDuration="20.751879336s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.592310091 +0000 UTC m=+944.625394289" lastFinishedPulling="2026-02-21 07:03:06.386648523 +0000 UTC m=+961.419732721" observedRunningTime="2026-02-21 07:03:07.746230292 +0000 UTC m=+962.779314490" watchObservedRunningTime="2026-02-21 07:03:07.751879336 +0000 UTC m=+962.784963534" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.767494 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" podStartSLOduration=4.663592071 podStartE2EDuration="20.767474633s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.255470954 +0000 UTC m=+944.288555152" lastFinishedPulling="2026-02-21 07:03:05.359353516 +0000 UTC m=+960.392437714" observedRunningTime="2026-02-21 07:03:07.760482251 +0000 UTC m=+962.793566449" watchObservedRunningTime="2026-02-21 07:03:07.767474633 +0000 UTC m=+962.800558851" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.773301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" event={"ID":"3c9c6322-ba57-47b3-a079-ab86a6660c45","Type":"ContainerStarted","Data":"97a93a3ab3298b861e7917b549aacf53c604fe8d8181182d6bef844181a1c607"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.773836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 
07:03:07.779052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podStartSLOduration=3.355233225 podStartE2EDuration="19.779041329s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.037609824 +0000 UTC m=+945.070694032" lastFinishedPulling="2026-02-21 07:03:06.461417938 +0000 UTC m=+961.494502136" observedRunningTime="2026-02-21 07:03:07.77761039 +0000 UTC m=+962.810694598" watchObservedRunningTime="2026-02-21 07:03:07.779041329 +0000 UTC m=+962.812125527" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.841682 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podStartSLOduration=3.415446973 podStartE2EDuration="19.841666673s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.037906442 +0000 UTC m=+945.070990640" lastFinishedPulling="2026-02-21 07:03:06.464126142 +0000 UTC m=+961.497210340" observedRunningTime="2026-02-21 07:03:07.840008597 +0000 UTC m=+962.873092795" watchObservedRunningTime="2026-02-21 07:03:07.841666673 +0000 UTC m=+962.874750871" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.843772 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" podStartSLOduration=4.845736406 podStartE2EDuration="20.84376487s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.361321411 +0000 UTC m=+944.394405609" lastFinishedPulling="2026-02-21 07:03:05.359349875 +0000 UTC m=+960.392434073" observedRunningTime="2026-02-21 07:03:07.81307277 +0000 UTC m=+962.846156968" watchObservedRunningTime="2026-02-21 07:03:07.84376487 +0000 UTC m=+962.876849068" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.857166 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" podStartSLOduration=4.318381367 podStartE2EDuration="20.857147416s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.345855088 +0000 UTC m=+944.378939286" lastFinishedPulling="2026-02-21 07:03:05.884621137 +0000 UTC m=+960.917705335" observedRunningTime="2026-02-21 07:03:07.856344444 +0000 UTC m=+962.889428642" watchObservedRunningTime="2026-02-21 07:03:07.857147416 +0000 UTC m=+962.890231614" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.872975 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" podStartSLOduration=5.839496744 podStartE2EDuration="20.872955308s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.817376729 +0000 UTC m=+944.850460927" lastFinishedPulling="2026-02-21 07:03:04.850835283 +0000 UTC m=+959.883919491" observedRunningTime="2026-02-21 07:03:07.871612372 +0000 UTC m=+962.904696570" watchObservedRunningTime="2026-02-21 07:03:07.872955308 +0000 UTC m=+962.906039506" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.925573 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" podStartSLOduration=3.886095859 podStartE2EDuration="20.925555088s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.368964619 +0000 UTC m=+944.402048817" lastFinishedPulling="2026-02-21 07:03:06.408423808 +0000 UTC m=+961.441508046" observedRunningTime="2026-02-21 07:03:07.896477772 +0000 UTC m=+962.929561970" watchObservedRunningTime="2026-02-21 07:03:07.925555088 +0000 UTC m=+962.958639286" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.929547 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" podStartSLOduration=4.14446856 podStartE2EDuration="20.929530777s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.623114554 +0000 UTC m=+944.656198752" lastFinishedPulling="2026-02-21 07:03:06.408176771 +0000 UTC m=+961.441260969" observedRunningTime="2026-02-21 07:03:07.91684462 +0000 UTC m=+962.949928818" watchObservedRunningTime="2026-02-21 07:03:07.929530777 +0000 UTC m=+962.962614965" Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.798429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" event={"ID":"18cf798f-3eea-4e15-8bb1-bda4895ffed4","Type":"ContainerStarted","Data":"416fcdccb11164b43d740b4b5d61c319f1d9403c1c679cb44ab47bdd46867e95"} Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.800463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" event={"ID":"fde95ed3-63bc-4401-b8b8-539da71db026","Type":"ContainerStarted","Data":"b1192e218f12fc5600249dff73a993446695f0c802cc54411c087ade54e19d94"} Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.802131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" event={"ID":"047df55d-9730-4215-bbd5-73fd59a0e9f5","Type":"ContainerStarted","Data":"6baf86d25604e57fcb0255cefe43d0b3f2a6212c279f009faa700c2e140acfb1"} Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.802301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.836077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podStartSLOduration=3.5746306580000002 podStartE2EDuration="24.836061792s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.82219867 +0000 UTC m=+944.855282868" lastFinishedPulling="2026-02-21 07:03:11.083629804 +0000 UTC m=+966.116714002" observedRunningTime="2026-02-21 07:03:11.835218358 +0000 UTC m=+966.868302576" watchObservedRunningTime="2026-02-21 07:03:11.836061792 +0000 UTC m=+966.869145990" Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.839139 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podStartSLOduration=3.793654939 podStartE2EDuration="24.839132945s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.038177689 +0000 UTC m=+945.071261887" lastFinishedPulling="2026-02-21 07:03:11.083655705 +0000 UTC m=+966.116739893" observedRunningTime="2026-02-21 07:03:11.821871263 +0000 UTC m=+966.854955461" watchObservedRunningTime="2026-02-21 07:03:11.839132945 +0000 UTC m=+966.872217143" Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.858687 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podStartSLOduration=2.560237253 podStartE2EDuration="23.85866744s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.84741267 +0000 UTC m=+944.880496868" lastFinishedPulling="2026-02-21 07:03:11.145842847 +0000 UTC m=+966.178927055" observedRunningTime="2026-02-21 07:03:11.852953604 +0000 UTC m=+966.886037802" watchObservedRunningTime="2026-02-21 07:03:11.85866744 +0000 UTC m=+966.891751638" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.844268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" event={"ID":"b248c78b-0213-4833-8d04-7d2514c2e673","Type":"ContainerStarted","Data":"45b992f75781678fd8292861d5a3b08d45baa97eaa8e9537383efd48c7712d5d"} Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.845316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.847119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" event={"ID":"b425a24f-112c-4e36-a173-21a59ce15ef0","Type":"ContainerStarted","Data":"369f6c95ea59095a532c1b2410684bb776de55541e04a12e74ecdb16090ec5a1"} Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.847281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.858640 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podStartSLOduration=3.495121143 podStartE2EDuration="30.858622143s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.801755602 +0000 UTC m=+944.834839800" lastFinishedPulling="2026-02-21 07:03:17.165256602 +0000 UTC m=+972.198340800" observedRunningTime="2026-02-21 07:03:17.85665461 +0000 UTC m=+972.889738828" watchObservedRunningTime="2026-02-21 07:03:17.858622143 +0000 UTC m=+972.891706341" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.872985 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podStartSLOduration=2.524044033 podStartE2EDuration="29.872960105s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" 
firstStartedPulling="2026-02-21 07:02:49.816361311 +0000 UTC m=+944.849445499" lastFinishedPulling="2026-02-21 07:03:17.165277363 +0000 UTC m=+972.198361571" observedRunningTime="2026-02-21 07:03:17.86837111 +0000 UTC m=+972.901455308" watchObservedRunningTime="2026-02-21 07:03:17.872960105 +0000 UTC m=+972.906044303" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.959632 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.976061 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.990299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.060627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.111154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.163455 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.256819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.415911 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.446311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.573618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.606675 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.608887 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.770252 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.789540 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.855114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" event={"ID":"903ed1dc-819c-4ed9-86f6-ca32e4f96792","Type":"ContainerStarted","Data":"cdc349be90fbaebf78e8e8079ade1530f0921606722048af69812815eab14d4e"} Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.855293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.856774 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" event={"ID":"d922fcc6-f8a7-451a-b998-fc04189a6d85","Type":"ContainerStarted","Data":"51ad6e3259f98f41ed77a7903e284798ad94f809c3733e9222ef28a185412b01"} Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.857109 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.887263 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podStartSLOduration=3.363906742 podStartE2EDuration="31.887227937s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.58170655 +0000 UTC m=+944.614790748" lastFinishedPulling="2026-02-21 07:03:18.105027745 +0000 UTC m=+973.138111943" observedRunningTime="2026-02-21 07:03:18.868815343 +0000 UTC m=+973.901899541" watchObservedRunningTime="2026-02-21 07:03:18.887227937 +0000 UTC m=+973.920312135" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.889503 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podStartSLOduration=3.6002155780000002 podStartE2EDuration="31.889493538s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.816262278 +0000 UTC m=+944.849346476" lastFinishedPulling="2026-02-21 07:03:18.105540218 +0000 UTC m=+973.138624436" observedRunningTime="2026-02-21 07:03:18.884889022 +0000 UTC m=+973.917973220" watchObservedRunningTime="2026-02-21 07:03:18.889493538 +0000 UTC m=+973.922577736" Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.933051 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.804346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.813820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.863476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" event={"ID":"2b4b6741-5442-4ef0-a8e1-49e389157cd4","Type":"ContainerStarted","Data":"366caa71f0aa3cbab5987da3dfb49df8274a416a7727bcfa3782ffcca2111cc2"} Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.983998 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xx9d" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.021161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.129001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.141000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.241362 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"] Feb 21 07:03:20 crc kubenswrapper[4820]: W0221 07:03:20.253446 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae82741_a73e_4d45_852f_a206550cb1e9.slice/crio-911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4 WatchSource:0}: Error finding container 911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4: Status 404 returned error can't find the container with id 911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4 Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.382001 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cbvg5" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 
07:03:20.395581 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.534327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.542616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.627261 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"] Feb 21 07:03:20 crc kubenswrapper[4820]: W0221 07:03:20.632180 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4453479_1bc9_4393_8853_396ec6ae4f7f.slice/crio-23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755 WatchSource:0}: Error finding container 23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755: Status 404 returned error can't find the container with id 23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755 Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.840474 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cvbk4" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.849109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.870595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" event={"ID":"2ae82741-a73e-4d45-852f-a206550cb1e9","Type":"ContainerStarted","Data":"911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4"} Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.871895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" event={"ID":"c4453479-1bc9-4393-8853-396ec6ae4f7f","Type":"ContainerStarted","Data":"23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755"} Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.256916 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"] Feb 21 07:03:21 crc kubenswrapper[4820]: W0221 07:03:21.258263 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5424a0f0_819f_46e7_9d7d_00bbe249e4a9.slice/crio-4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404 WatchSource:0}: Error finding container 4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404: Status 404 returned error can't find the container with id 4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404 Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.880702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" 
event={"ID":"5424a0f0-819f-46e7-9d7d-00bbe249e4a9","Type":"ContainerStarted","Data":"4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404"} Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.880771 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.896524 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podStartSLOduration=5.585497161 podStartE2EDuration="34.896503387s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.816486514 +0000 UTC m=+944.849570702" lastFinishedPulling="2026-02-21 07:03:19.12749273 +0000 UTC m=+974.160576928" observedRunningTime="2026-02-21 07:03:21.892729804 +0000 UTC m=+976.925814002" watchObservedRunningTime="2026-02-21 07:03:21.896503387 +0000 UTC m=+976.929587585" Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.913442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" event={"ID":"5424a0f0-819f-46e7-9d7d-00bbe249e4a9","Type":"ContainerStarted","Data":"238def0c128f64bdd62a932bd33e3221602451cf84d633811f4b0dafb85d55c7"} Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.913985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.947774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" podStartSLOduration=38.947750651 podStartE2EDuration="38.947750651s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:03:26.93597742 +0000 UTC m=+981.969061618" watchObservedRunningTime="2026-02-21 07:03:26.947750651 +0000 UTC m=+981.980834849" Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.285853 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.442313 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.482181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.492808 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.919383 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.931656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" event={"ID":"2ae82741-a73e-4d45-852f-a206550cb1e9","Type":"ContainerStarted","Data":"edf491468e9dca651aff1278be210b4e198227f49dec2734353deac9141c0189"} Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.933096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.934038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" event={"ID":"c4453479-1bc9-4393-8853-396ec6ae4f7f","Type":"ContainerStarted","Data":"5365ed08bcb909816ae09e97f0deef291904e77b7305621ff9678689b30831dd"} Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.934131 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.952607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" podStartSLOduration=34.386371238 podStartE2EDuration="42.952587158s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:03:20.255538186 +0000 UTC m=+975.288622394" lastFinishedPulling="2026-02-21 07:03:28.821754116 +0000 UTC m=+983.854838314" observedRunningTime="2026-02-21 07:03:29.947555541 +0000 UTC m=+984.980639749" watchObservedRunningTime="2026-02-21 07:03:29.952587158 +0000 UTC m=+984.985671356" Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.981854 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" podStartSLOduration=34.796753388 podStartE2EDuration="42.981836846s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:03:20.634038006 +0000 UTC m=+975.667122204" lastFinishedPulling="2026-02-21 07:03:28.819121464 +0000 UTC m=+983.852205662" observedRunningTime="2026-02-21 07:03:29.970749114 +0000 UTC m=+985.003833322" watchObservedRunningTime="2026-02-21 07:03:29.981836846 +0000 UTC m=+985.014921044" Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.027399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.401399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.857879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:43 crc kubenswrapper[4820]: I0221 07:03:43.816255 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:03:43 crc kubenswrapper[4820]: I0221 07:03:43.816314 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.400851 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.403888 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.405696 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fjv69" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.410698 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.410732 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.412799 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.419926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.481992 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.489855 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.489959 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.492513 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.553778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.553891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655116 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.656194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.676641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.721227 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.757181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.759372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 
07:03:55.775536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.808807 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.953442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.958634 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.252007 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:56 crc kubenswrapper[4820]: W0221 07:03:56.256496 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d05b01_8b86_4bf7_9b3b_ed179f362f27.slice/crio-4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71 WatchSource:0}: Error finding container 4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71: Status 404 returned error can't find the container with id 4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71 Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.341197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" event={"ID":"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443","Type":"ContainerStarted","Data":"5080cc5ff51ef876cebda0dd6972095bb3db5861f03d8743a19d9051a2266fb4"} Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.342200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" event={"ID":"c1d05b01-8b86-4bf7-9b3b-ed179f362f27","Type":"ContainerStarted","Data":"4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71"} Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.746279 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.801082 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.802524 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.809084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: 
\"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998316 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.000171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.003547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc 
kubenswrapper[4820]: I0221 07:03:58.051492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.165644 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.176970 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.201060 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.202765 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211872 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.219125 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.314733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 
crc kubenswrapper[4820]: I0221 07:03:58.315073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.335576 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.573801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.787193 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:58 crc kubenswrapper[4820]: W0221 07:03:58.825904 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod924235f7_e875_49cd_b7c1_1cfa96515a97.slice/crio-5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c WatchSource:0}: Error finding container 5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c: Status 404 returned error can't find the container with id 5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.997791 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.999570 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.001046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002108 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4n8x9" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002609 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002826 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002939 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.004395 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.004532 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.011019 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.081296 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131432 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131595 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233358 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234563 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234730 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234886 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235404 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.239148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.239874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.253580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.259459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc 
kubenswrapper[4820]: I0221 07:03:59.270003 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.270640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.307163 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.308605 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312459 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312484 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312577 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312763 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312865 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-server-dockercfg-4bthw" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312988 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.329492 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.343132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.396051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerStarted","Data":"89ede790e040e0e9c21f3a91218ea509876d44fee835aa305d75785ff546742f"} Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.397628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerStarted","Data":"5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c"} Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.436951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.436995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc 
kubenswrapper[4820]: I0221 07:03:59.437015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437198 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437350 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539781 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " 
pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.540004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541187 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541526 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.547186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.557599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.561506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.569280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.570979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.588658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.643297 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.627173 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.628326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.630362 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634300 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634313 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ldndf" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634416 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.636081 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.639258 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759904 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861949 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862428 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: 
I0221 07:04:00.862684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.880815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893870 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0" Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.960643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.091962 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.093020 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.095256 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.095749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qq6hv" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.096023 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.096184 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.107548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.197757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.198647 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.200308 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.200431 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vhpwj" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.203961 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.215203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.230891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.230959 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.231088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231770 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.333810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333947 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334391 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.335694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.336118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" 
(UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.336254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.339489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.340343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.340854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.354988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.365305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.412050 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435717 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.435742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.436334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.437175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.438815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.440702 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.456900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: 
\"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.517131 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.392838 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.394692 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.398157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-22vkm" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.411915 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.563009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.664696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.686459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: 
\"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.716794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.606223 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.607649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.609119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hmdkm" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.609278 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.610790 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.613900 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.616395 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.625511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.665312 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706603 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: 
I0221 07:04:07.706696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706904 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706926 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810262 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810313 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810488 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " 
pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811604 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.812139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.812962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.813177 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.813836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.815627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" 
Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828313 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.928094 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.942690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.953432 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.964277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.965585 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966085 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qmlp6" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966184 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966408 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966600 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.969919 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133540 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133946 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.134030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.134080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236460 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.237161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.237478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.248431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.255262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.257777 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.258190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc 
kubenswrapper[4820]: I0221 07:04:09.260658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.287878 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.765771 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.768450 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.771843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.771904 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dg8hr" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.772062 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.775556 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.781194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " 
pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967376 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967498 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968133 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.969221 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.975459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.975899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.977618 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.983429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.988891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:11 crc kubenswrapper[4820]: I0221 07:04:11.093564 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.504050 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.504487 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbhzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fcf94d689-pdmt8_openstack(c1d05b01-8b86-4bf7-9b3b-ed179f362f27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.505820 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" podUID="c1d05b01-8b86-4bf7-9b3b-ed179f362f27" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.516573 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.516698 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgn5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-855cbc58c5-9mgmj_openstack(25fe0e65-8e41-4f6a-b4ba-499f9ffc6443): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.518047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" podUID="25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" Feb 21 07:04:13 crc kubenswrapper[4820]: I0221 07:04:13.816734 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:04:13 crc kubenswrapper[4820]: I0221 07:04:13.816962 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.103775 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.114760 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.135942 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.236761 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.243066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.327794 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.335060 4820 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455bfe0a_a135_4900_8b15_ce584dc8a5bb.slice/crio-a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e WatchSource:0}: Error finding container a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e: Status 404 returned error can't find the container with id a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.377656 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.395469 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod593c6a26_a16a_4cf6_8aa9_b20bb6d56da7.slice/crio-a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe WatchSource:0}: Error finding container a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe: Status 404 returned error can't find the container with id a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.403129 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.410591 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81af4bd_d2af_4a26_8f4d_a3e612778607.slice/crio-11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a WatchSource:0}: Error finding container 11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a: Status 404 returned error can't find the container with id 11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.436814 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:14 
crc kubenswrapper[4820]: W0221 07:04:14.441408 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7880da24_89a6_4428_b9c1_5ffe6647af01.slice/crio-675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93 WatchSource:0}: Error finding container 675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93: Status 404 returned error can't find the container with id 675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.503302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.505076 4820 generic.go:334] "Generic (PLEG): container finished" podID="85621024-c5dd-4598-817a-62024db91c1d" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" exitCode=0 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.505128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.506117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.507543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"506d7091e1481dd403657fac413ff300e649bdb874981551b296a055c67d3957"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.511364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerStarted","Data":"49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.512279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.513120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerStarted","Data":"a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.513892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"c7e2b7a7c0a492a7d1fe2c8d85d83a8801b3d4fa1ad893af52ea27c7826ffccc"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.515556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"77697e6f65480c0a8c7ecc85d340b2d52d583c5d92b5093accb994850dd6cd98"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.516607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerStarted","Data":"60eb280dafd317b213ced0ce92cb061208211ecad999bed743c8a76df9e0ad8d"} Feb 21 
07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.518305 4820 generic.go:334] "Generic (PLEG): container finished" podID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6" exitCode=0 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.518381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"} Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.766463 4820 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 21 07:04:14 crc kubenswrapper[4820]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 07:04:14 crc kubenswrapper[4820]: > podSandboxID="5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c" Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.766906 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:04:14 crc kubenswrapper[4820]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kc55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d6b9fdb89-b9hkw_openstack(924235f7-e875-49cd-b7c1-1cfa96515a97): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 07:04:14 crc kubenswrapper[4820]: > logger="UnhandledError" Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.768085 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.881229 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.957755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.957914 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.958211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config" (OuterVolumeSpecName: "config") pod "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" (UID: "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.958419 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.966926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m" (OuterVolumeSpecName: "kube-api-access-xgn5m") pod "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" (UID: "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443"). InnerVolumeSpecName "kube-api-access-xgn5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.019678 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.059904 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.161689 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.161815 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config" (OuterVolumeSpecName: "config") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.166155 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr" (OuterVolumeSpecName: "kube-api-access-dbhzr") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "kube-api-access-dbhzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.225027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262931 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262964 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262973 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: W0221 07:04:15.489908 4820 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf0c3ff8_e36f_4539_a7da_9d2b1e7a146d.slice/crio-b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2 WatchSource:0}: Error finding container b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2: Status 404 returned error can't find the container with id b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2 Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.531810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerStarted","Data":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.531872 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.537582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" event={"ID":"c1d05b01-8b86-4bf7-9b3b-ed179f362f27","Type":"ContainerDied","Data":"4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.537595 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.539384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.541543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" event={"ID":"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443","Type":"ContainerDied","Data":"5080cc5ff51ef876cebda0dd6972095bb3db5861f03d8743a19d9051a2266fb4"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.541573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.552823 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" podStartSLOduration=3.046876297 podStartE2EDuration="17.552804343s" podCreationTimestamp="2026-02-21 07:03:58 +0000 UTC" firstStartedPulling="2026-02-21 07:03:59.093994781 +0000 UTC m=+1014.127078969" lastFinishedPulling="2026-02-21 07:04:13.599922817 +0000 UTC m=+1028.633007015" observedRunningTime="2026-02-21 07:04:15.548200358 +0000 UTC m=+1030.581284556" watchObservedRunningTime="2026-02-21 07:04:15.552804343 +0000 UTC m=+1030.585888541" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.632601 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.649491 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.675364 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 
21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.682249 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.712486 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" path="/var/lib/kubelet/pods/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443/volumes" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.713121 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d05b01-8b86-4bf7-9b3b-ed179f362f27" path="/var/lib/kubelet/pods/c1d05b01-8b86-4bf7-9b3b-ed179f362f27/volumes" Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.590071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerStarted","Data":"a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff"} Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.590452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.612179 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.118880023 podStartE2EDuration="19.612157773s" podCreationTimestamp="2026-02-21 07:04:02 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.258622917 +0000 UTC m=+1029.291707115" lastFinishedPulling="2026-02-21 07:04:20.751900667 +0000 UTC m=+1035.784984865" observedRunningTime="2026-02-21 07:04:21.608074761 +0000 UTC m=+1036.641158959" watchObservedRunningTime="2026-02-21 07:04:21.612157773 +0000 UTC m=+1036.645241971" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.598305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" 
event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerStarted","Data":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.598913 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.601606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerStarted","Data":"baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.601957 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.603945 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.606845 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d" exitCode=0 Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.606938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.608387 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9"} Feb 21 07:04:22 
crc kubenswrapper[4820]: I0221 07:04:22.610330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.614280 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.619018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerStarted","Data":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.619520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.625853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podStartSLOduration=10.875907662 podStartE2EDuration="25.62582821s" podCreationTimestamp="2026-02-21 07:03:57 +0000 UTC" firstStartedPulling="2026-02-21 07:03:58.83170594 +0000 UTC m=+1013.864790138" lastFinishedPulling="2026-02-21 07:04:13.581626488 +0000 UTC m=+1028.614710686" observedRunningTime="2026-02-21 07:04:22.620757681 +0000 UTC m=+1037.653841889" watchObservedRunningTime="2026-02-21 07:04:22.62582821 +0000 UTC m=+1037.658912408" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.627446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.692713 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sfpp9" podStartSLOduration=8.734206698 podStartE2EDuration="15.692696562s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.405063599 +0000 UTC m=+1029.438147797" lastFinishedPulling="2026-02-21 07:04:21.363553463 +0000 UTC m=+1036.396637661" observedRunningTime="2026-02-21 07:04:22.684718165 +0000 UTC m=+1037.717802373" watchObservedRunningTime="2026-02-21 07:04:22.692696562 +0000 UTC m=+1037.725780760" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.709799 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.358194176 podStartE2EDuration="18.709778308s" podCreationTimestamp="2026-02-21 07:04:04 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.116874392 +0000 UTC m=+1029.149958610" lastFinishedPulling="2026-02-21 07:04:21.468458544 +0000 UTC m=+1036.501542742" observedRunningTime="2026-02-21 07:04:22.70509685 +0000 UTC m=+1037.738181048" watchObservedRunningTime="2026-02-21 07:04:22.709778308 +0000 UTC m=+1037.742862506" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.574866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.679042 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.680595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" 
event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6"} Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.680741 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73"} Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.681202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.681266 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.709934 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rwsk7" podStartSLOduration=10.287186731 podStartE2EDuration="16.709916337s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.443367804 +0000 UTC m=+1029.476452002" lastFinishedPulling="2026-02-21 07:04:20.86609741 +0000 UTC m=+1035.899181608" observedRunningTime="2026-02-21 07:04:23.703214634 +0000 UTC m=+1038.736298832" watchObservedRunningTime="2026-02-21 07:04:23.709916337 +0000 UTC m=+1038.743000535" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.718466 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.725061 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.727195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.727286 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns" containerID="cri-o://8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" gracePeriod=10 Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.751803 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.509104581999999 podStartE2EDuration="17.751787734s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.337180539 +0000 UTC m=+1029.370264737" lastFinishedPulling="2026-02-21 07:04:23.579863691 +0000 UTC m=+1038.612947889" observedRunningTime="2026-02-21 07:04:24.744710371 +0000 UTC m=+1039.777794569" watchObservedRunningTime="2026-02-21 07:04:24.751787734 +0000 UTC m=+1039.784871932" Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.769931 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.687732947 podStartE2EDuration="15.769916328s" podCreationTimestamp="2026-02-21 07:04:09 +0000 UTC" firstStartedPulling="2026-02-21 07:04:15.492673263 +0000 UTC m=+1030.525757461" lastFinishedPulling="2026-02-21 07:04:23.574856644 +0000 UTC m=+1038.607940842" observedRunningTime="2026-02-21 07:04:24.764350186 +0000 UTC m=+1039.797434384" 
watchObservedRunningTime="2026-02-21 07:04:24.769916328 +0000 UTC m=+1039.803000526" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.270422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.428711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55" (OuterVolumeSpecName: "kube-api-access-6kc55") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "kube-api-access-6kc55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.464841 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.482303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config" (OuterVolumeSpecName: "config") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525454 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525463 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746063 4820 generic.go:334] "Generic (PLEG): container finished" podID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" exitCode=0 Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 
07:04:25.746183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"} Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c"} Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746404 4820 scope.go:117] "RemoveContainer" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.748083 4820 generic.go:334] "Generic (PLEG): container finished" podID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerID="4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1" exitCode=0 Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.748103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1"} Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.828186 4820 scope.go:117] "RemoveContainer" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.850622 4820 scope.go:117] "RemoveContainer" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" Feb 21 07:04:25 crc kubenswrapper[4820]: E0221 07:04:25.852481 4820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": container with ID starting with 8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac not found: ID does not exist" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.852507 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"} err="failed to get container status \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": rpc error: code = NotFound desc = could not find container \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": container with ID starting with 8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac not found: ID does not exist" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.852527 4820 scope.go:117] "RemoveContainer" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6" Feb 21 07:04:25 crc kubenswrapper[4820]: E0221 07:04:25.853083 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": container with ID starting with 552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6 not found: ID does not exist" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.853100 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"} err="failed to get container status \"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": rpc error: code = NotFound desc = could not find container 
\"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": container with ID starting with 552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6 not found: ID does not exist" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.861426 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.868603 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.093860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.094073 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.131435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.755986 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerID="5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf" exitCode=0 Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.756046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf"} Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.759963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25"} Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.806471 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.352523127 podStartE2EDuration="25.806432033s" podCreationTimestamp="2026-02-21 07:04:01 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.412712588 +0000 UTC m=+1029.445796786" lastFinishedPulling="2026-02-21 07:04:20.866621504 +0000 UTC m=+1035.899705692" observedRunningTime="2026-02-21 07:04:26.798001323 +0000 UTC m=+1041.831085541" watchObservedRunningTime="2026-02-21 07:04:26.806432033 +0000 UTC m=+1041.839516231" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.288797 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.321214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.519537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.710729 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" path="/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volumes" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.770998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7"} Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.771299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.798461 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.560146597 
podStartE2EDuration="28.798442991s" podCreationTimestamp="2026-02-21 07:03:59 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.123114522 +0000 UTC m=+1029.156198730" lastFinishedPulling="2026-02-21 07:04:21.361410926 +0000 UTC m=+1036.394495124" observedRunningTime="2026-02-21 07:04:27.788409968 +0000 UTC m=+1042.821494166" watchObservedRunningTime="2026-02-21 07:04:27.798442991 +0000 UTC m=+1042.831527189" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.805972 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.812593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968263 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:27 crc kubenswrapper[4820]: E0221 07:04:27.968635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns" Feb 21 07:04:27 crc kubenswrapper[4820]: E0221 07:04:27.968663 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="init" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968669 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="init" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968835 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.969859 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.973334 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.988746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.008342 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.009345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.012193 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.023792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.097836 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:28 crc kubenswrapper[4820]: E0221 07:04:28.098411 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-phbmt ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" podUID="acd01fc7-7058-41a4-b8f6-7d5cb3626330" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.123262 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.124465 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.127132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.136161 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186632 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " 
pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187160 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " 
pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.188899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.190882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.204938 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.219584 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228989 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.229168 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-45s9c" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.240990 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.272904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: 
\"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod 
\"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.291537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: 
\"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.291699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.292449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.295106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.296267 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.305444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " 
pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.332830 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391713 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392124 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.393046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.393327 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc 
kubenswrapper[4820]: I0221 07:04:28.393900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.411424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.441326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494396 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " 
pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.498183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.504704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.504925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.521058 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.521095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.524694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.530952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 
07:04:28.551459 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.783774 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.797649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.861958 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902850 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" 
(UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.903188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.903491 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.904336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.905173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config" (OuterVolumeSpecName: "config") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.912942 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt" (OuterVolumeSpecName: "kube-api-access-phbmt") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). 
InnerVolumeSpecName "kube-api-access-phbmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.947494 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:28 crc kubenswrapper[4820]: W0221 07:04:28.964545 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c30e7c_2c39_4e47_a120_d3da3367497e.slice/crio-5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f WatchSource:0}: Error finding container 5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f: Status 404 returned error can't find the container with id 5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005204 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005606 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005616 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.033270 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:04:29 crc kubenswrapper[4820]: W0221 07:04:29.033645 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5b71e95_fe49_48b2_8d7b_575e17855d52.slice/crio-604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2 WatchSource:0}: Error finding container 604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2: Status 404 returned error can't find the container with id 604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2 Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.790871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerStarted","Data":"5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.791816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.792651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerStarted","Data":"7d34608592e5bad3ce2cdbb838b7f2d91070fccc15c351f0f966dcae95c21a16"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.792714 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.831100 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.837106 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:30 crc kubenswrapper[4820]: I0221 07:04:30.962551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 21 07:04:30 crc kubenswrapper[4820]: I0221 07:04:30.962945 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 21 07:04:31 crc kubenswrapper[4820]: I0221 07:04:31.706119 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd01fc7-7058-41a4-b8f6-7d5cb3626330" path="/var/lib/kubelet/pods/acd01fc7-7058-41a4-b8f6-7d5cb3626330/volumes" Feb 21 07:04:32 crc kubenswrapper[4820]: I0221 07:04:32.413227 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:32 crc kubenswrapper[4820]: I0221 07:04:32.413291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.686787 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.721041 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.747476 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.748802 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.778413 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913522 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.015029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.033376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod 
\"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.081778 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.542265 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:35 crc kubenswrapper[4820]: W0221 07:04:35.557574 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97c27e55_f0a0_4253_b573_21c027992fe7.slice/crio-86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502 WatchSource:0}: Error finding container 86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502: Status 404 returned error can't find the container with id 86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502 Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.840778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerStarted","Data":"4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce"} Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.841709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerStarted","Data":"86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502"} Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.922118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.927870 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930407 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pfbp5" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930699 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930808 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.932136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.945720 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " 
pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030642 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.131969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132156 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132189 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132303 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:36.632222333 +0000 UTC m=+1051.665306531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132479 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 
07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.147601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.156171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.186165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.639858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640094 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640124 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640185 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift 
podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:37.640166662 +0000 UTC m=+1052.673250860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.859143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.860678 4820 generic.go:334] "Generic (PLEG): container finished" podID="97c27e55-f0a0-4253-b573-21c027992fe7" containerID="b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf" exitCode=0 Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.860757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.862210 4820 generic.go:334] "Generic (PLEG): container finished" podID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerID="bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233" exitCode=0 Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.862283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerDied","Data":"bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.914643 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-p2v97" podStartSLOduration=9.914625675 podStartE2EDuration="9.914625675s" podCreationTimestamp="2026-02-21 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:36.913081463 +0000 UTC m=+1051.946165671" watchObservedRunningTime="2026-02-21 07:04:36.914625675 +0000 UTC m=+1051.947709873" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.270634 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360528 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: 
\"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.365354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4" (OuterVolumeSpecName: "kube-api-access-vdpn4") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "kube-api-access-vdpn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.379686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config" (OuterVolumeSpecName: "config") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.381752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.383356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.400766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463031 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463456 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463534 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 
07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463607 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.666651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666867 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666888 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666944 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:39.666926657 +0000 UTC m=+1054.700010865 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerDied","Data":"5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871267 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871323 4820 scope.go:117] "RemoveContainer" containerID="bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.875100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.875392 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.877159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerStarted","Data":"768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.877459 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.925826 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.934286 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.297617092 podStartE2EDuration="9.934266466s" podCreationTimestamp="2026-02-21 07:04:28 +0000 UTC" firstStartedPulling="2026-02-21 07:04:29.036578509 +0000 UTC m=+1044.069662707" lastFinishedPulling="2026-02-21 07:04:36.673227883 +0000 UTC m=+1051.706312081" observedRunningTime="2026-02-21 07:04:37.919169234 +0000 UTC m=+1052.952253442" watchObservedRunningTime="2026-02-21 07:04:37.934266466 +0000 UTC m=+1052.967350684" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.934335 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.936519 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:35416->38.102.83.201:43255: write tcp 38.102.83.201:35416->38.102.83.201:43255: write: broken pipe Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.944299 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podStartSLOduration=3.944282029 podStartE2EDuration="3.944282029s" podCreationTimestamp="2026-02-21 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:37.937320279 +0000 UTC m=+1052.970404487" watchObservedRunningTime="2026-02-21 07:04:37.944282029 +0000 UTC m=+1052.977366227" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.410900 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.486037 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.695854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696162 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696196 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696274 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:43.696253576 +0000 UTC m=+1058.729337774 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.712733 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" path="/var/lib/kubelet/pods/44c30e7c-2c39-4e47-a120-d3da3367497e/volumes" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.785520 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.786285 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.786314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.786598 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.787585 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.790530 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.790698 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.791283 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.795497 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921728 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921758 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 
07:04:39.921866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.922002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.922028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023439 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023602 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.025597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.030524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.033206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.036699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.042690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod 
\"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.044693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.047952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.137654 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.562221 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.901390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerStarted","Data":"c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4"} Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.070452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.161300 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.164081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.165450 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.169706 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.173082 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.244956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.245056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.346772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.346814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: 
\"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.347805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.390134 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.485684 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.941989 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: W0221 07:04:41.951406 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a5ba110_ecad_46c8_8fc2_5dc5b3efaa21.slice/crio-81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77 WatchSource:0}: Error finding container 81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77: Status 404 returned error can't find the container with id 81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77 Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927185 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerID="6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c" exitCode=0 Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerDied","Data":"6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c"} Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerStarted","Data":"81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77"} Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.624257 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.626493 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.630852 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.700708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.701212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.701340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701516 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701555 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701773 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift 
podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:51.701751617 +0000 UTC m=+1066.734835815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.740296 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.741364 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.745373 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.772678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.803179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.803295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.805398 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818745 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818828 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.819677 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.819738 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c" gracePeriod=600 Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.848138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.883990 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.885575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.905759 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.906863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.906914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.963536 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.964949 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.967807 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.970808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.980589 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.008469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.008552 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.009088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " 
pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.009271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.010583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.034869 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.059611 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.110949 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.112816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.130541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.213238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.213327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.214077 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.229291 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.234474 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.290519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963083 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c" exitCode=0 Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963203 4820 scope.go:117] "RemoveContainer" containerID="71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.114779 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.122222 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.203510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.203994 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" containerID="cri-o://ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" gracePeriod=10 Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.231749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.231822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.232901 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" (UID: "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.233782 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.245648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv" (OuterVolumeSpecName: "kube-api-access-jrllv") pod "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" (UID: "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21"). InnerVolumeSpecName "kube-api-access-jrllv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.339680 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.541820 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:45 crc kubenswrapper[4820]: W0221 07:04:45.544184 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb290d702_774e_48b8_a243_5a9c648740a7.slice/crio-6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7 WatchSource:0}: Error finding container 6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7: Status 404 returned error can't find the container with id 6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7 Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.606542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:45 crc kubenswrapper[4820]: W0221 
07:04:45.607744 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8377d0c3_40a1_4a4a_b6c8_67f66dfa602d.slice/crio-c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c WatchSource:0}: Error finding container c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c: Status 404 returned error can't find the container with id c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.629944 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.764017 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.807306 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.950812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.950938 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.951008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod 
\"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.957577 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7" (OuterVolumeSpecName: "kube-api-access-2hjs7") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "kube-api-access-2hjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.975286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerStarted","Data":"3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.975331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerStarted","Data":"6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.982148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerStarted","Data":"ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.982189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerStarted","Data":"c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.988521 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config" (OuterVolumeSpecName: "config") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.990894 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.991608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerDied","Data":"81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.991636 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.998199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerStarted","Data":"fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.999418 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c8ba-account-create-update-wmp66" podStartSLOduration=2.999395452 podStartE2EDuration="2.999395452s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:45.993486462 +0000 UTC m=+1061.026570660" watchObservedRunningTime="2026-02-21 07:04:45.999395452 +0000 UTC m=+1061.032479650" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.004333 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerStarted","Data":"51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.004450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerStarted","Data":"2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.009099 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerStarted","Data":"77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.009150 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerStarted","Data":"ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.013381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015741 4820 generic.go:334] "Generic (PLEG): container finished" podID="85621024-c5dd-4598-817a-62024db91c1d" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" exitCode=0 Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"89ede790e040e0e9c21f3a91218ea509876d44fee835aa305d75785ff546742f"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015828 4820 scope.go:117] "RemoveContainer" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015942 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.016853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6cfkd" podStartSLOduration=3.016834158 podStartE2EDuration="3.016834158s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.006906058 +0000 UTC m=+1061.039990256" watchObservedRunningTime="2026-02-21 07:04:46.016834158 +0000 UTC m=+1061.049918356" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.032346 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-j8m4b" podStartSLOduration=3.032321031 podStartE2EDuration="3.032321031s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.024668752 +0000 UTC m=+1061.057752950" watchObservedRunningTime="2026-02-21 07:04:46.032321031 +0000 UTC m=+1061.065405229" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.036737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052678 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052709 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hjs7\" (UniqueName: 
\"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052720 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.054077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rf689" podStartSLOduration=2.565535523 podStartE2EDuration="7.054041552s" podCreationTimestamp="2026-02-21 07:04:39 +0000 UTC" firstStartedPulling="2026-02-21 07:04:40.56998686 +0000 UTC m=+1055.603071058" lastFinishedPulling="2026-02-21 07:04:45.058492889 +0000 UTC m=+1060.091577087" observedRunningTime="2026-02-21 07:04:46.044561695 +0000 UTC m=+1061.077645893" watchObservedRunningTime="2026-02-21 07:04:46.054041552 +0000 UTC m=+1061.087125750" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.062022 4820 scope.go:117] "RemoveContainer" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.085077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b298-account-create-update-wh2wv" podStartSLOduration=3.085057548 podStartE2EDuration="3.085057548s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.083404123 +0000 UTC m=+1061.116488321" watchObservedRunningTime="2026-02-21 07:04:46.085057548 +0000 UTC m=+1061.118141746" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.094304 4820 scope.go:117] "RemoveContainer" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: E0221 07:04:46.097486 
4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": container with ID starting with ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad not found: ID does not exist" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.097526 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} err="failed to get container status \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": rpc error: code = NotFound desc = could not find container \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": container with ID starting with ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad not found: ID does not exist" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.097552 4820 scope.go:117] "RemoveContainer" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.103294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:46 crc kubenswrapper[4820]: E0221 07:04:46.104964 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": container with ID starting with 9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e not found: ID does not exist" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.105016 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e"} err="failed to get container status \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": rpc error: code = NotFound desc = could not find container \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": container with ID starting with 9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e not found: ID does not exist" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.111334 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.051380 4820 generic.go:334] "Generic (PLEG): container finished" podID="b290d702-774e-48b8-a243-5a9c648740a7" containerID="3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.051439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerDied","Data":"3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.054564 4820 generic.go:334] "Generic (PLEG): container finished" podID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerID="ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.054681 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerDied","Data":"ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.056508 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf044875-b3ef-48f5-b802-1bd167de5685" 
containerID="51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.056598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerDied","Data":"51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.058748 4820 generic.go:334] "Generic (PLEG): container finished" podID="d781b010-be2e-465d-9789-d6188ac5a30e" containerID="77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.058913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerDied","Data":"77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670423 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670766 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="init" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670790 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="init" Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670827 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670834 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670844 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670851 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671022 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671031 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.682267 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.705122 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85621024-c5dd-4598-817a-62024db91c1d" path="/var/lib/kubelet/pods/85621024-c5dd-4598-817a-62024db91c1d/volumes" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.765336 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.766449 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.769008 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.779935 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.782320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.782376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.883911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.883988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: 
\"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.884032 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.884059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.886190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.905196 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.985449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: 
\"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.985516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.986260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.002219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.020776 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.087071 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.477051 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.484158 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.598414 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"cf044875-b3ef-48f5-b802-1bd167de5685\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.598877 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"cf044875-b3ef-48f5-b802-1bd167de5685\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"b290d702-774e-48b8-a243-5a9c648740a7\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599083 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"b290d702-774e-48b8-a243-5a9c648740a7\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599934 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b290d702-774e-48b8-a243-5a9c648740a7" (UID: "b290d702-774e-48b8-a243-5a9c648740a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf044875-b3ef-48f5-b802-1bd167de5685" (UID: "cf044875-b3ef-48f5-b802-1bd167de5685"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.604103 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx" (OuterVolumeSpecName: "kube-api-access-jjxcx") pod "cf044875-b3ef-48f5-b802-1bd167de5685" (UID: "cf044875-b3ef-48f5-b802-1bd167de5685"). InnerVolumeSpecName "kube-api-access-jjxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.604160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25" (OuterVolumeSpecName: "kube-api-access-2kv25") pod "b290d702-774e-48b8-a243-5a9c648740a7" (UID: "b290d702-774e-48b8-a243-5a9c648740a7"). InnerVolumeSpecName "kube-api-access-2kv25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.613648 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.621831 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.624040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"d781b010-be2e-465d-9789-d6188ac5a30e\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700917 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"d781b010-be2e-465d-9789-d6188ac5a30e\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700955 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701445 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 
crc kubenswrapper[4820]: I0221 07:04:48.701455 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" (UID: "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701462 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701490 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701477 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d781b010-be2e-465d-9789-d6188ac5a30e" (UID: "d781b010-be2e-465d-9789-d6188ac5a30e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701501 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.705684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr" (OuterVolumeSpecName: "kube-api-access-5t9wr") pod "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" (UID: "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d"). InnerVolumeSpecName "kube-api-access-5t9wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.705832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps" (OuterVolumeSpecName: "kube-api-access-d2gps") pod "d781b010-be2e-465d-9789-d6188ac5a30e" (UID: "d781b010-be2e-465d-9789-d6188ac5a30e"). InnerVolumeSpecName "kube-api-access-d2gps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.726167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.761742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:48 crc kubenswrapper[4820]: W0221 07:04:48.768489 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1974d89_b3a1_4cc5_b113_fb39248e5bf0.slice/crio-cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850 WatchSource:0}: Error finding container cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850: Status 404 returned error can't find the container with id cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850 Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.803127 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804551 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804634 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804717 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2gps\" (UniqueName: 
\"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.073762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerStarted","Data":"ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.074039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerStarted","Data":"8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerDied","Data":"c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076044 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076062 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerDied","Data":"2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078181 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078211 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerDied","Data":"ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079416 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079476 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087876 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerID="4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a" exitCode=0 Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerDied","Data":"4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerStarted","Data":"cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerDied","Data":"6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090572 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090638 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.624742 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.632025 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.709892 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" path="/var/lib/kubelet/pods/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21/volumes" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710611 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710859 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710876 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710888 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710894 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710915 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710934 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710940 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711102 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711112 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711124 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711135 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711708 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.714065 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.824808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.824956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.927486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.927548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.928790 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.951613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.032881 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.102636 4820 generic.go:334] "Generic (PLEG): container finished" podID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerID="ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2" exitCode=0 Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.103062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerDied","Data":"ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2"} Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.506721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.541998 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.574551 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641276 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.642265 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1974d89-b3a1-4cc5-b113-fb39248e5bf0" (UID: "e1974d89-b3a1-4cc5-b113-fb39248e5bf0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.642652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" (UID: "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.647262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh" (OuterVolumeSpecName: "kube-api-access-rl9zh") pod "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" (UID: "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce"). InnerVolumeSpecName "kube-api-access-rl9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.647308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc" (OuterVolumeSpecName: "kube-api-access-z8hzc") pod "e1974d89-b3a1-4cc5-b113-fb39248e5bf0" (UID: "e1974d89-b3a1-4cc5-b113-fb39248e5bf0"). InnerVolumeSpecName "kube-api-access-z8hzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743889 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743933 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743949 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743963 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.132994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerDied","Data":"cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.134226 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.134196 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerDied","Data":"8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135733 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135781 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137474 4820 generic.go:334] "Generic (PLEG): container finished" podID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerID="4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488" exitCode=0 Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerDied","Data":"4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerStarted","Data":"37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.763067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: 
\"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763575 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763606 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763665 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:05:07.763644426 +0000 UTC m=+1082.796728624 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.147648 4820 generic.go:334] "Generic (PLEG): container finished" podID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerID="fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63" exitCode=0 Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.147856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerDied","Data":"fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63"} Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.507909 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.586969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"4fc5af9d-a695-46e8-94c2-acfa134131a7\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.587029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"4fc5af9d-a695-46e8-94c2-acfa134131a7\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.587881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fc5af9d-a695-46e8-94c2-acfa134131a7" (UID: "4fc5af9d-a695-46e8-94c2-acfa134131a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.592016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg" (OuterVolumeSpecName: "kube-api-access-49bqg") pod "4fc5af9d-a695-46e8-94c2-acfa134131a7" (UID: "4fc5af9d-a695-46e8-94c2-acfa134131a7"). InnerVolumeSpecName "kube-api-access-49bqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.688754 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.688801 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.922705 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923271 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923337 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923350 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923378 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" 
containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923587 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923612 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923623 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.924353 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.926652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.926931 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.931751 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.991391 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output=< Feb 21 07:04:52 crc kubenswrapper[4820]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 21 07:04:52 crc kubenswrapper[4820]: > Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.015422 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.058664 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod 
\"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.099753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.100854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 
07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.101023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.117555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerDied","Data":"37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810"} Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159165 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159170 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.253140 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.286121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.287539 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.298465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.305556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.402836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403152 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: 
\"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod 
\"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505332 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: 
\"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.506347 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.507786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.522027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.574557 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.624491 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708488 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.709690 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.716459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.717252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8" (OuterVolumeSpecName: "kube-api-access-4crq8") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "kube-api-access-4crq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.738023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.738259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.742352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.742684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts" (OuterVolumeSpecName: "scripts") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826105 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826153 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826170 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826181 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826295 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826308 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.868160 4820 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.088510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.168478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerStarted","Data":"9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b"} Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.169360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerStarted","Data":"06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687"} Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerDied","Data":"c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4"} Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172617 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4" Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172706 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.181984 4820 generic.go:334] "Generic (PLEG): container finished" podID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerID="25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e" exitCode=0 Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.182040 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerDied","Data":"25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e"} Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.184809 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa49984a-9511-4449-adc6-997899961f73" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" exitCode=0 Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.184869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"} Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.186946 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" exitCode=0 Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.186980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"} Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.814457 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod924235f7-e875-49cd-b7c1-1cfa96515a97"] err="unable to destroy 
cgroup paths for cgroup [kubepods besteffort pod924235f7-e875-49cd-b7c1-1cfa96515a97] : Timed out while waiting for systemd to remove kubepods-besteffort-pod924235f7_e875_49cd_b7c1_1cfa96515a97.slice" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.186656 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.194411 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.196368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"} Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.196661 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.200793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"} Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.201350 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.222286 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.608229889 podStartE2EDuration="59.222268722s" podCreationTimestamp="2026-02-21 07:03:57 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.136865967 +0000 UTC m=+1029.169950155" lastFinishedPulling="2026-02-21 07:04:20.75090479 +0000 UTC m=+1035.783988988" observedRunningTime="2026-02-21 07:04:56.216314068 
+0000 UTC m=+1071.249398296" watchObservedRunningTime="2026-02-21 07:04:56.222268722 +0000 UTC m=+1071.255352920" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.242632 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.137134744 podStartE2EDuration="58.242611596s" podCreationTimestamp="2026-02-21 07:03:58 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.255948344 +0000 UTC m=+1029.289032542" lastFinishedPulling="2026-02-21 07:04:21.361425196 +0000 UTC m=+1036.394509394" observedRunningTime="2026-02-21 07:04:56.241449124 +0000 UTC m=+1071.274533342" watchObservedRunningTime="2026-02-21 07:04:56.242611596 +0000 UTC m=+1071.275695794" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.498595 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593474 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: 
\"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run" (OuterVolumeSpecName: "var-run") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593972 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594386 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594455 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594511 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts" (OuterVolumeSpecName: "scripts") pod 
"6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.610212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj" (OuterVolumeSpecName: "kube-api-access-vkdgj") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "kube-api-access-vkdgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696354 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696388 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696400 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.209175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerDied","Data":"9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b"} Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.209223 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b" Feb 21 07:04:57 
crc kubenswrapper[4820]: I0221 07:04:57.209290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.586648 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.602860 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.686983 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:04:57 crc kubenswrapper[4820]: E0221 07:04:57.687365 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687388 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance" Feb 21 07:04:57 crc kubenswrapper[4820]: E0221 07:04:57.687416 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687425 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687637 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687662 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.688348 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.698040 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.707159 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" path="/var/lib/kubelet/pods/4fc5af9d-a695-46e8-94c2-acfa134131a7/volumes" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.707747 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" path="/var/lib/kubelet/pods/6d564dd0-292f-4a24-9f18-a1e1bac56e9d/volumes" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.708423 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812598 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812653 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.813059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914553 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.916879 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.930908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc8kh\" (UniqueName: 
\"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.976556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:58 crc kubenswrapper[4820]: I0221 07:04:58.007649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:04:58 crc kubenswrapper[4820]: I0221 07:04:58.465784 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:04:58 crc kubenswrapper[4820]: W0221 07:04:58.486576 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d4589d4_4df1_40d9_9af3_fedff5530ab1.slice/crio-3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718 WatchSource:0}: Error finding container 3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718: Status 404 returned error can't find the container with id 3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718 Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.227827 4820 generic.go:334] "Generic (PLEG): container finished" podID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerID="6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde" exitCode=0 Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.227872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerDied","Data":"6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde"} Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.228132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerStarted","Data":"3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718"} Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.207812 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bzcnx"] Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.208964 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.210899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.218906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bzcnx"] Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.271202 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.271437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.372603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.372763 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.373475 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.418359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.529173 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.459919 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554693 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554723 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run" (OuterVolumeSpecName: "var-run") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555130 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555141 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.556398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts" (OuterVolumeSpecName: "scripts") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.559073 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh" (OuterVolumeSpecName: "kube-api-access-vc8kh") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "kube-api-access-vc8kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656856 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656895 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656904 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656914 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.790699 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bzcnx"] Feb 21 07:05:05 crc kubenswrapper[4820]: W0221 07:05:05.801181 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df65e7d_3ade_4585_9f5f_7a4b7c8bc8eb.slice/crio-a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593 WatchSource:0}: Error finding container a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593: Status 404 returned error can't find the container with id a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593 Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.805717 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.287851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerDied","Data":"3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.288121 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.287886 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289894 4820 generic.go:334] "Generic (PLEG): container finished" podID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerID="bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8" exitCode=0 Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerDied","Data":"bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerStarted","Data":"a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.292149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerStarted","Data":"3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8"} Feb 21 
07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.322880 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5knjn" podStartSLOduration=2.687514647 podStartE2EDuration="14.322859035s" podCreationTimestamp="2026-02-21 07:04:52 +0000 UTC" firstStartedPulling="2026-02-21 07:04:53.84704097 +0000 UTC m=+1068.880125168" lastFinishedPulling="2026-02-21 07:05:05.482385358 +0000 UTC m=+1080.515469556" observedRunningTime="2026-02-21 07:05:06.318836495 +0000 UTC m=+1081.351920713" watchObservedRunningTime="2026-02-21 07:05:06.322859035 +0000 UTC m=+1081.355943243" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.536417 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.545053 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.629750 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.690758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.690888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.691674 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" (UID: "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.696568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk" (OuterVolumeSpecName: "kube-api-access-m6rqk") pod "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" (UID: "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb"). InnerVolumeSpecName "kube-api-access-m6rqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.708612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" path="/var/lib/kubelet/pods/6d4589d4-4df1-40d9-9af3-fedff5530ab1/volumes" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792618 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792631 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.798558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.811657 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerDied","Data":"a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593"} Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311943 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311801 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.323325 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:05:08 crc kubenswrapper[4820]: W0221 07:05:08.325885 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2200daa_1861_49f4_965a_68417ec65542.slice/crio-0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f WatchSource:0}: Error finding container 0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f: Status 404 returned error can't find the container with id 0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f Feb 21 07:05:09 crc kubenswrapper[4820]: I0221 07:05:09.325189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f"} Feb 21 07:05:09 crc kubenswrapper[4820]: I0221 07:05:09.347420 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:05:09 crc kubenswrapper[4820]: 
I0221 07:05:09.646809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.335922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"} Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165193 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:11 crc kubenswrapper[4820]: E0221 07:05:11.165601 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165626 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: E0221 07:05:11.165665 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165673 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165832 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.166471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.189155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.281248 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.282222 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.285288 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.291985 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.349359 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.349636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.366651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.367892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.383436 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.384579 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.391654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.400167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.408127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.446452 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.447483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.449551 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450264 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc 
kubenswrapper[4820]: I0221 07:05:11.451696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.452530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.465232 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.470516 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.471567 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.484960 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.502383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553514 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553536 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " 
pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.554407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.569266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.600226 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: 
\"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656070 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: 
\"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656692 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.658934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.659693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.676494 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.689813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " 
pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.705355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.705366 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.706267 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.707284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.715534 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.723553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.758111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.758181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod 
\"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.759135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.771144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.784400 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.791864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.828602 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.869355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.869438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.971747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.971811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.972584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.996377 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.001014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.105053 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.238668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.350490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerStarted","Data":"83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de"} Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.395979 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.398185 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26b24bc_e904_49a1_b2bc_d140b0032b83.slice/crio-7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545 WatchSource:0}: Error finding container 7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545: Status 404 returned error can't find the container with id 7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545 Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.504304 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.520218 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8a463c_63a8_424f_a3ab_4e46390b8cca.slice/crio-d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a WatchSource:0}: Error finding container d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a: Status 404 returned error can't find the container with id d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.527131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.527652 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9dd869_f673_4077_b345_05b4e79eb590.slice/crio-ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032 WatchSource:0}: Error finding container ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032: Status 404 returned error can't find the container with id ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032 Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.541200 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 
07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.547949 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.553249 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8063e5a_6b15_4855_9ae2_5fdcc912b472.slice/crio-ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f WatchSource:0}: Error finding container ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f: Status 404 returned error can't find the container with id ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.673939 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.684699 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d96043_ca9d_4dd0_aa3e_8bcd5941a97b.slice/crio-7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed WatchSource:0}: Error finding container 7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed: Status 404 returned error can't find the container with id 7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.359360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerStarted","Data":"7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.360924 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" 
event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerStarted","Data":"1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.361889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerStarted","Data":"ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.362836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerStarted","Data":"ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.363892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerStarted","Data":"7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.364811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerStarted","Data":"d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.377429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerStarted","Data":"bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.380751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" 
event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerStarted","Data":"b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.383621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerStarted","Data":"9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.385818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerStarted","Data":"03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.387693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerStarted","Data":"b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.392785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerStarted","Data":"2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.398415 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c516-account-create-update-mxhpl" podStartSLOduration=3.398398265 podStartE2EDuration="3.398398265s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.392869524 +0000 UTC m=+1089.425953732" watchObservedRunningTime="2026-02-21 07:05:14.398398265 +0000 UTC 
m=+1089.431482463" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.415813 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lnssq" podStartSLOduration=3.41579041 podStartE2EDuration="3.41579041s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.412108739 +0000 UTC m=+1089.445192937" watchObservedRunningTime="2026-02-21 07:05:14.41579041 +0000 UTC m=+1089.448874608" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.427864 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-jng5b" podStartSLOduration=3.427847148 podStartE2EDuration="3.427847148s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.426335707 +0000 UTC m=+1089.459419905" watchObservedRunningTime="2026-02-21 07:05:14.427847148 +0000 UTC m=+1089.460931346" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.444793 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4e9a-account-create-update-55xqx" podStartSLOduration=3.444767539 podStartE2EDuration="3.444767539s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.437129132 +0000 UTC m=+1089.470213330" watchObservedRunningTime="2026-02-21 07:05:14.444767539 +0000 UTC m=+1089.477851737" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.459474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6976-account-create-update-mzpt2" podStartSLOduration=3.45945045 podStartE2EDuration="3.45945045s" 
podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.452670865 +0000 UTC m=+1089.485755063" watchObservedRunningTime="2026-02-21 07:05:14.45945045 +0000 UTC m=+1089.492534648" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.475830 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-w9fxb" podStartSLOduration=3.475807715 podStartE2EDuration="3.475807715s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.470698697 +0000 UTC m=+1089.503782895" watchObservedRunningTime="2026-02-21 07:05:14.475807715 +0000 UTC m=+1089.508891913" Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.402547 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerID="9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.402766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerDied","Data":"9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.419196 4820 generic.go:334] "Generic (PLEG): container finished" podID="d69a9369-affe-4441-bf33-3c0f13540875" containerID="03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.419286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerDied","Data":"03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.431446 4820 generic.go:334] "Generic (PLEG): container finished" podID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerID="b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.431653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerDied","Data":"b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.437103 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b9dd869-f673-4077-b345-05b4e79eb590" containerID="2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.437185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerDied","Data":"2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.439594 4820 generic.go:334] "Generic (PLEG): container finished" podID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerID="bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.439653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerDied","Data":"bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.440620 4820 generic.go:334] "Generic (PLEG): container finished" podID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerID="b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.440650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerDied","Data":"b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60"} Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.454869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"} Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.456927 4820 generic.go:334] "Generic (PLEG): container finished" podID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerID="3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8" exitCode=0 Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.457015 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerDied","Data":"3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8"} Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.830319 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.837220 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996515 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\" (UID: 
\"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.997142 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" (UID: "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.997247 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8063e5a-6b15-4855-9ae2-5fdcc912b472" (UID: "b8063e5a-6b15-4855-9ae2-5fdcc912b472"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.002285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh" (OuterVolumeSpecName: "kube-api-access-cqphh") pod "b8063e5a-6b15-4855-9ae2-5fdcc912b472" (UID: "b8063e5a-6b15-4855-9ae2-5fdcc912b472"). InnerVolumeSpecName "kube-api-access-cqphh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.008456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8" (OuterVolumeSpecName: "kube-api-access-hrds8") pod "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" (UID: "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44"). InnerVolumeSpecName "kube-api-access-hrds8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098261 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098273 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098281 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.131151 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.142549 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.158331 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.174180 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.203862 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.300937 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"d69a9369-affe-4441-bf33-3c0f13540875\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301205 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod 
\"d69a9369-affe-4441-bf33-3c0f13540875\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301229 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301268 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"4b9dd869-f673-4077-b345-05b4e79eb590\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301522 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"4b9dd869-f673-4077-b345-05b4e79eb590\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301713 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e8a463c-63a8-424f-a3ab-4e46390b8cca" (UID: "8e8a463c-63a8-424f-a3ab-4e46390b8cca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302223 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" (UID: "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302553 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302578 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302945 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b9dd869-f673-4077-b345-05b4e79eb590" (UID: "4b9dd869-f673-4077-b345-05b4e79eb590"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.303098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d69a9369-affe-4441-bf33-3c0f13540875" (UID: "d69a9369-affe-4441-bf33-3c0f13540875"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.304366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6" (OuterVolumeSpecName: "kube-api-access-rptw6") pod "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" (UID: "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b"). InnerVolumeSpecName "kube-api-access-rptw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq" (OuterVolumeSpecName: "kube-api-access-fbwdq") pod "d69a9369-affe-4441-bf33-3c0f13540875" (UID: "d69a9369-affe-4441-bf33-3c0f13540875"). InnerVolumeSpecName "kube-api-access-fbwdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq" (OuterVolumeSpecName: "kube-api-access-ccfxq") pod "8e8a463c-63a8-424f-a3ab-4e46390b8cca" (UID: "8e8a463c-63a8-424f-a3ab-4e46390b8cca"). InnerVolumeSpecName "kube-api-access-ccfxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5" (OuterVolumeSpecName: "kube-api-access-2jsg5") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "kube-api-access-2jsg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.309575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh" (OuterVolumeSpecName: "kube-api-access-sg8hh") pod "4b9dd869-f673-4077-b345-05b4e79eb590" (UID: "4b9dd869-f673-4077-b345-05b4e79eb590"). InnerVolumeSpecName "kube-api-access-sg8hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.327905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.344801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data" (OuterVolumeSpecName: "config-data") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404007 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404053 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404069 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404104 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404119 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404145 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc 
kubenswrapper[4820]: I0221 07:05:19.404159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404172 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404183 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerDied","Data":"ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481753 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481775 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerDied","Data":"d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483050 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483115 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490047 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerDied","Data":"83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490089 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490152 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.497510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerStarted","Data":"ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerDied","Data":"ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499893 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.505964 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.505996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerDied","Data":"7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.506035 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507905 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerDied","Data":"06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507948 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.516663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.516706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerDied","Data":"1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7"} Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518541 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518583 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.538871 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-68q2w" podStartSLOduration=1.8546603560000001 podStartE2EDuration="8.538850811s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="2026-02-21 07:05:12.400556344 +0000 UTC m=+1087.433640542" lastFinishedPulling="2026-02-21 07:05:19.084746789 +0000 UTC m=+1094.117830997" observedRunningTime="2026-02-21 07:05:19.515396751 +0000 UTC m=+1094.548480969" watchObservedRunningTime="2026-02-21 07:05:19.538850811 +0000 UTC m=+1094.571935009" Feb 21 07:05:19 crc kubenswrapper[4820]: E0221 07:05:19.547015 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9dd869_f673_4077_b345_05b4e79eb590.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69a9369_affe_4441_bf33_3c0f13540875.slice\": RecentStats: unable to find data in memory cache]" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"} Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"} Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"} Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613560 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613891 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613908 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613916 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613923 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613938 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613944 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613961 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create" Feb 21 07:05:20 crc 
kubenswrapper[4820]: E0221 07:05:20.613974 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613989 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613996 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync" Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.614004 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614011 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614144 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614153 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614161 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614172 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614186 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614195 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614207 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.615012 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.646316 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731179 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832648 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832680 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833785 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.834331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.853434 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.940610 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.416915 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:21 crc kubenswrapper[4820]: W0221 07:05:21.423756 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bf146cf_2a59_4e4c_8b3b_cd34b40ac463.slice/crio-9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4 WatchSource:0}: Error finding container 9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4: Status 404 returned error can't find the container with id 9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4 Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.540260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerStarted","Data":"9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4"} Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.545680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"} Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.545825 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"} Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.591038 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.835195834 podStartE2EDuration="47.591018943s" podCreationTimestamp="2026-02-21 07:04:34 +0000 UTC" 
firstStartedPulling="2026-02-21 07:05:08.328386846 +0000 UTC m=+1083.361471044" lastFinishedPulling="2026-02-21 07:05:19.084209955 +0000 UTC m=+1094.117294153" observedRunningTime="2026-02-21 07:05:21.579830538 +0000 UTC m=+1096.612914756" watchObservedRunningTime="2026-02-21 07:05:21.591018943 +0000 UTC m=+1096.624103141" Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.879155 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.936197 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.937497 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.939259 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.955374 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc 
kubenswrapper[4820]: I0221 07:05:22.057222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057275 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " 
pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc 
kubenswrapper[4820]: I0221 07:05:22.159615 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.159709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160371 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.178171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.280426 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.555034 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" exitCode=0 Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.555126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1"} Feb 21 07:05:22 crc kubenswrapper[4820]: W0221 07:05:22.698983 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf122b6d8_f1d8_49ff_9056_d1c1cfd1ff5f.slice/crio-a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017 WatchSource:0}: Error finding container a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017: Status 404 returned error can't find the container with id a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017 Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.707631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.563986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" 
event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerStarted","Data":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.564479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" containerID="cri-o://f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" gracePeriod=10 Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.564776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566554 4820 generic.go:334] "Generic (PLEG): container finished" podID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerID="04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85" exitCode=0 Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerStarted","Data":"a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.596369 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" podStartSLOduration=3.596347369 podStartE2EDuration="3.596347369s" podCreationTimestamp="2026-02-21 07:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-21 07:05:23.588529056 +0000 UTC m=+1098.621613274" watchObservedRunningTime="2026-02-21 07:05:23.596347369 +0000 UTC m=+1098.629431567" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.044119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.192014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.192029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.196407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh" (OuterVolumeSpecName: "kube-api-access-tb8rh") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "kube-api-access-tb8rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.230163 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.232311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.233700 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.238264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config" (OuterVolumeSpecName: "config") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294750 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294963 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294981 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294993 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.295010 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.575703 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b24bc-e904-49a1-b2bc-d140b0032b83" 
containerID="ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d" exitCode=0 Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.575788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerDied","Data":"ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.578903 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerStarted","Data":"e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.579007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580617 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" exitCode=0 Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580658 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580688 4820 scope.go:117] "RemoveContainer" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.616136 4820 scope.go:117] "RemoveContainer" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.630525 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" podStartSLOduration=3.630494215 podStartE2EDuration="3.630494215s" podCreationTimestamp="2026-02-21 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:24.612569306 +0000 UTC m=+1099.645653504" watchObservedRunningTime="2026-02-21 07:05:24.630494215 +0000 UTC m=+1099.663578413" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.642982 4820 scope.go:117] "RemoveContainer" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: E0221 07:05:24.643542 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": container with ID starting with f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755 not found: ID does not exist" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643595 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} err="failed to get container status \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": rpc error: code = 
NotFound desc = could not find container \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": container with ID starting with f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755 not found: ID does not exist" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643627 4820 scope.go:117] "RemoveContainer" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643989 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:25 crc kubenswrapper[4820]: E0221 07:05:24.644076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": container with ID starting with 2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1 not found: ID does not exist" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.644106 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1"} err="failed to get container status \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": rpc error: code = NotFound desc = could not find container \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": container with ID starting with 2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1 not found: ID does not exist" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.654069 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:25.710412 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" 
path="/var/lib/kubelet/pods/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463/volumes" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:25.913833 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.022963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.023028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.023078 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.028675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8" (OuterVolumeSpecName: "kube-api-access-fllw8") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "kube-api-access-fllw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.045307 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.067136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data" (OuterVolumeSpecName: "config-data") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125637 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125681 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125696 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" 
event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerDied","Data":"7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545"} Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607920 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.854433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.855021 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" containerID="cri-o://e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" gracePeriod=10 Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.895478 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.895955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.895973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.895994 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896002 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" 
containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.896029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="init" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896037 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="init" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896277 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896307 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.897019 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901615 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901755 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.908769 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.909979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:26 crc kubenswrapper[4820]: 
I0221 07:05:26.911679 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.925394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.938860 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941675 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: 
\"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043454 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043713 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043989 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod 
\"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044386 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044628 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044718 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") 
" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.045481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.047461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.048287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.048853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.053889 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.058827 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.059034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.059069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.066212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.071940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.072412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.101101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.149448 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.150668 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.154068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.154328 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.155064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mmvl6" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.182624 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.183648 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.189730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.189946 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfdgf" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.194267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.234308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.248388 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " 
pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249409 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249437 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.252520 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zjnng" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.253021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.263348 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.264457 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.272033 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277089 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p47r7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277358 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277897 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.280961 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.314744 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.334983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353518 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod 
\"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353620 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: 
\"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.362508 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.363700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.364047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: 
I0221 07:05:27.364839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.365064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.371912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.373861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.377917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.377994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.388013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.422984 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.431162 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.433696 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.433935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.445376 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.455924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456057 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456117 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456139 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdtb\" (UniqueName: 
\"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456218 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456372 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: 
\"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456467 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.461025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.463759 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.464064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.468830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.472891 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.472918 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.474199 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.474487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.481107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.481141 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcn9d\" (UniqueName: 
\"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.500513 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.540689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.541694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.563914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.565858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.574979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.576875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.577788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.581697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.582040 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.591440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.600711 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.654031 4820 generic.go:334] "Generic (PLEG): container finished" podID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerID="e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" exitCode=0 Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.654067 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83"} Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667811 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.694354 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.760747 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.773946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.775954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.776963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.777971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.778591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod 
\"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.781872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.782420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.830809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: W0221 07:05:27.834787 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52535b6c_a2fa_41da_aeea_143da861244d.slice/crio-8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1 WatchSource:0}: Error finding container 8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1: Status 404 returned error can't find the container with id 8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1 Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.876608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.876977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877137 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.897101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw" (OuterVolumeSpecName: 
"kube-api-access-9gjfw") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "kube-api-access-9gjfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.952981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.964381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.977450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.980210 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982019 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982102 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982158 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.984211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config" (OuterVolumeSpecName: "config") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.009689 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: E0221 07:05:28.010219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="init" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010277 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="init" Feb 21 07:05:28 crc kubenswrapper[4820]: E0221 07:05:28.010291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010789 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.011917 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014010 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014226 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014912 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014953 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.061704 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.079182 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.080795 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.085368 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.085676 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.092465 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.093306 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.093341 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.105107 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.109712 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.195681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196040 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196076 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196099 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") 
pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297983 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " 
pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 
07:05:28.298220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298270 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.299163 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.300206 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.301340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.306229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.306551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.307832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.308570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.323342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.323866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.324095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.324356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.347538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.349968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.459229 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.473025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.488965 4820 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:28 crc kubenswrapper[4820]: W0221 07:05:28.507407 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb400c916_2ba9_4d7e_b9f5_6044605f279c.slice/crio-8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24 WatchSource:0}: Error finding container 8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24: Status 404 returned error can't find the container with id 8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24 Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.509805 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.548720 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.590450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.623569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.682857 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710194 4820 generic.go:334] "Generic (PLEG): container finished" podID="52535b6c-a2fa-41da-aeea-143da861244d" containerID="7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d" exitCode=0 Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerDied","Data":"7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerStarted","Data":"8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.771487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerStarted","Data":"17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825679 4820 scope.go:117] "RemoveContainer" containerID="e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825859 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.889868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerStarted","Data":"cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.931053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerStarted","Data":"8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.938862 4820 scope.go:117] "RemoveContainer" containerID="04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.941089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerStarted","Data":"1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.941138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerStarted","Data":"0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.967480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerStarted","Data":"ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.970251 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] 
Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.974622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"0f8176927ad01d0eb54f7e8ca55f1bbe340ac767367622047b311589a963df40"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.003314 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.013156 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-clkkr" podStartSLOduration=3.013129818 podStartE2EDuration="3.013129818s" podCreationTimestamp="2026-02-21 07:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:28.987676434 +0000 UTC m=+1104.020760632" watchObservedRunningTime="2026-02-21 07:05:29.013129818 +0000 UTC m=+1104.046214016" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.034025 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lj8d2" podStartSLOduration=2.034001747 podStartE2EDuration="2.034001747s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:29.007119015 +0000 UTC m=+1104.040203213" watchObservedRunningTime="2026-02-21 07:05:29.034001747 +0000 UTC m=+1104.067085945" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.291867 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.384626 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.466035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: W0221 07:05:29.489499 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod461dc704_1698_4a81_bb65_4009ee43495d.slice/crio-d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d WatchSource:0}: Error finding container d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d: Status 404 returned error can't find the container with id d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.553530 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561566 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561739 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 
07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561798 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.562023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.574822 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8" (OuterVolumeSpecName: "kube-api-access-8vhx8") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "kube-api-access-8vhx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.599084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.602465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config" (OuterVolumeSpecName: "config") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.639692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.641171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.649871 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.655383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667005 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667051 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667064 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667079 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667094 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667105 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.670164 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:29 crc 
kubenswrapper[4820]: I0221 07:05:29.711088 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" path="/var/lib/kubelet/pods/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f/volumes" Feb 21 07:05:29 crc kubenswrapper[4820]: E0221 07:05:29.881913 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52535b6c_a2fa_41da_aeea_143da861244d.slice\": RecentStats: unable to find data in memory cache]" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991226 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerDied","Data":"8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991303 4820 scope.go:117] "RemoveContainer" containerID="7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991427 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997062 4820 generic.go:334] "Generic (PLEG): container finished" podID="375bfff4-76af-4f71-a665-c409feeb6f67" containerID="18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65" exitCode=0 Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerStarted","Data":"0deee66ca0c914e04051643e2ef7f61bf67d60020463554eb611d4a4dbdb4fc8"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.007430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"7ac42339ffb42ecc0717cb27e6d9608813dcb6377518f31c7fcea3928ee2ca43"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.030692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.043504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerStarted","Data":"52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.134878 4820 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.143673 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.057734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.063069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.074015 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerStarted","Data":"fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.074296 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.096696 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podStartSLOduration=4.096681355 podStartE2EDuration="4.096681355s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:31.096016227 +0000 UTC m=+1106.129100425" watchObservedRunningTime="2026-02-21 07:05:31.096681355 +0000 UTC m=+1106.129765553" Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.708196 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52535b6c-a2fa-41da-aeea-143da861244d" path="/var/lib/kubelet/pods/52535b6c-a2fa-41da-aeea-143da861244d/volumes" Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.094891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae"} Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.095067 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log" containerID="cri-o://41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.095107 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd" containerID="cri-o://8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd"} Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100633 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log" containerID="cri-o://b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100779 4820 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd" containerID="cri-o://45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.152725 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.152701162 podStartE2EDuration="5.152701162s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:32.148143767 +0000 UTC m=+1107.181227965" watchObservedRunningTime="2026-02-21 07:05:32.152701162 +0000 UTC m=+1107.185785360" Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.158266 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.158227202 podStartE2EDuration="6.158227202s" podCreationTimestamp="2026-02-21 07:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:32.12076709 +0000 UTC m=+1107.153851288" watchObservedRunningTime="2026-02-21 07:05:32.158227202 +0000 UTC m=+1107.191311390" Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112729 4820 generic.go:334] "Generic (PLEG): container finished" podID="461dc704-1698-4a81-bb65-4009ee43495d" containerID="45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd" exitCode=0 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112762 4820 generic.go:334] "Generic (PLEG): container finished" podID="461dc704-1698-4a81-bb65-4009ee43495d" containerID="b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952" exitCode=143 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112844 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.113682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122476 4820 generic.go:334] "Generic (PLEG): container finished" podID="316968d3-d62b-4a31-b157-02f4a33cd175" containerID="8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae" exitCode=0 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122518 4820 generic.go:334] "Generic (PLEG): container finished" podID="316968d3-d62b-4a31-b157-02f4a33cd175" containerID="41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f" exitCode=143 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f"} Feb 21 07:05:34 crc kubenswrapper[4820]: I0221 07:05:34.150264 4820 generic.go:334] "Generic (PLEG): container finished" podID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerID="1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b" exitCode=0 Feb 21 07:05:34 crc kubenswrapper[4820]: I0221 07:05:34.150329 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerDied","Data":"1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b"} Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.547617 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714801 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.715008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.720411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.720751 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts" (OuterVolumeSpecName: "scripts") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.721293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2" (OuterVolumeSpecName: "kube-api-access-z4ts2") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "kube-api-access-z4ts2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.728495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.743030 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.744590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data" (OuterVolumeSpecName: "config-data") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817339 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817765 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817826 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817887 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817943 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817997 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.112479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.184147 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 
07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.184411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" containerID="cri-o://768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" gracePeriod=10 Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.212908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerDied","Data":"0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf"} Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.212956 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.213014 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.644251 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.664702 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731078 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:38 crc kubenswrapper[4820]: E0221 07:05:38.731805 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731826 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: E0221 07:05:38.731841 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732065 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732092 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732875 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742518 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742871 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743152 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55sl\" (UniqueName: 
\"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743288 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743337 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 
07:05:38.743515 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.768175 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.852923 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.853071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.853619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.854068 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " 
pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.854969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.869726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.067430 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.223733 4820 generic.go:334] "Generic (PLEG): container finished" podID="97c27e55-f0a0-4253-b573-21c027992fe7" containerID="768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" exitCode=0 Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.223780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505"} Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.706545 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" path="/var/lib/kubelet/pods/aece5dfd-5954-404f-b713-4fe36b649ce9/volumes" Feb 21 07:05:40 crc kubenswrapper[4820]: I0221 07:05:40.082366 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" 
podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 21 07:05:45 crc kubenswrapper[4820]: I0221 07:05:45.082801 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 21 07:05:47 crc kubenswrapper[4820]: E0221 07:05:47.019919 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5ac8ede62671a3b3695cf29bd3a6f124f27c93d1730f9030cc3daa05034d4af4" Feb 21 07:05:47 crc kubenswrapper[4820]: E0221 07:05:47.020352 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5ac8ede62671a3b3695cf29bd3a6f124f27c93d1730f9030cc3daa05034d4af4,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfh55dh557h5f5h5b9h654h5ddhc4hbfh5bch75h645hc9h64bh68h56dh5d6h674h55ch98h648h7dh7fh85h68h556h5bh674h55h549h69hd9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vr9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a3cce54d-5f2a-4e51-864d-03e55b50d698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.311654 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.312087 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcn9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-wdvf7_openstack(e2b995bf-93f1-4f28-a1a6-0d13ac9ca744): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.313308 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-wdvf7" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.409961 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36\\\"\"" pod="openstack/placement-db-sync-wdvf7" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.713763 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.714145 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2l4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vfn4b_openstack(b400c916-2ba9-4d7e-b9f5-6044605f279c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.715304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vfn4b" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.874571 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.886513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.900858 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978807 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978996 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.981014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.981041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs" (OuterVolumeSpecName: "logs") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.984053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g" (OuterVolumeSpecName: "kube-api-access-j6r6g") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "kube-api-access-j6r6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.988852 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.999652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts" (OuterVolumeSpecName: "scripts") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.013753 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.031352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data" (OuterVolumeSpecName: "config-data") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.056788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.080837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081706 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081742 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.081766 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081817 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081852 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081898 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" 
(UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082080 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082106 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082702 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082734 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.082748 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084033 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs" (OuterVolumeSpecName: "logs") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084418 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084455 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084470 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084480 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084499 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.085765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.089853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.090742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts" (OuterVolumeSpecName: "scripts") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.091209 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6" (OuterVolumeSpecName: "kube-api-access-sf6x6") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "kube-api-access-sf6x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.093424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6" (OuterVolumeSpecName: "kube-api-access-4zxq6") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "kube-api-access-4zxq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.112974 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.123688 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.133953 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.136046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.136908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config" (OuterVolumeSpecName: "config") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.141148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.143997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.150435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data" (OuterVolumeSpecName: "config-data") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186142 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186182 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186196 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186207 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186216 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186227 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186260 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186273 4820 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186285 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186294 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186307 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186318 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186331 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186375 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.203459 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.287592 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415878 4820 scope.go:117] "RemoveContainer" containerID="768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415567 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.418308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerStarted","Data":"550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.418371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerStarted","Data":"6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.420774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"7ac42339ffb42ecc0717cb27e6d9608813dcb6377518f31c7fcea3928ee2ca43"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.420820 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.425024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.425086 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.431749 4820 generic.go:334] "Generic (PLEG): container finished" podID="085b95c8-2602-461b-8a08-91aff75f97a0" containerID="52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3" exitCode=0 Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.431805 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerDied","Data":"52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.433750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.435385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerStarted","Data":"2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.446937 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s76l5" podStartSLOduration=13.446918079 podStartE2EDuration="13.446918079s" 
podCreationTimestamp="2026-02-21 07:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:51.440980616 +0000 UTC m=+1126.474064814" watchObservedRunningTime="2026-02-21 07:05:51.446918079 +0000 UTC m=+1126.480002277" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.452690 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b\\\"\"" pod="openstack/cinder-db-sync-vfn4b" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.453031 4820 scope.go:117] "RemoveContainer" containerID="b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.477475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.484756 4820 scope.go:117] "RemoveContainer" containerID="8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.487921 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.500156 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-smnkd" podStartSLOduration=2.312935826 podStartE2EDuration="24.500138252s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.50230363 +0000 UTC m=+1103.535387828" lastFinishedPulling="2026-02-21 07:05:50.689506056 +0000 UTC m=+1125.722590254" observedRunningTime="2026-02-21 07:05:51.492516473 +0000 UTC m=+1126.525600681" 
watchObservedRunningTime="2026-02-21 07:05:51.500138252 +0000 UTC m=+1126.533222450" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.512450 4820 scope.go:117] "RemoveContainer" containerID="41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.542078 4820 scope.go:117] "RemoveContainer" containerID="45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.546272 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.570803 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.579749 4820 scope.go:117] "RemoveContainer" containerID="b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.590678 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591062 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="init" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591082 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="init" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591092 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591102 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591122 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591128 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591139 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591166 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591184 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591189 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591351 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591369 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591382 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591393 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591401 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.592283 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.606263 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.607005 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.607416 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.608313 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.608505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.619842 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.628113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.635093 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.636657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.638864 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.639145 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.641804 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.697534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.699548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc 
kubenswrapper[4820]: I0221 07:05:51.713287 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" path="/var/lib/kubelet/pods/316968d3-d62b-4a31-b157-02f4a33cd175/volumes" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.714178 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461dc704-1698-4a81-bb65-4009ee43495d" path="/var/lib/kubelet/pods/461dc704-1698-4a81-bb65-4009ee43495d/volumes" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.716986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" path="/var/lib/kubelet/pods/97c27e55-f0a0-4253-b573-21c027992fe7/volumes" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801935 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802112 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802253 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802410 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803029 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803045 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.808358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.809032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.809816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.812038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.822657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod 
\"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.833836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.903719 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904867 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " 
pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905374 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 21 
07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.909956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.910836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.914792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.915216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.915826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.921891 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.928028 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.935514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.944703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.959024 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.501080 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.608076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:52 crc kubenswrapper[4820]: W0221 07:05:52.633742 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f5a553e_c548_455a_83e2_87f8f71f3067.slice/crio-52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd WatchSource:0}: Error finding container 52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd: Status 404 returned error can't find the container with id 52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.909766 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.044402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq" (OuterVolumeSpecName: "kube-api-access-vjnzq") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "kube-api-access-vjnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.097081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config" (OuterVolumeSpecName: "config") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.138430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145284 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145385 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145467 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.469486 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd"} Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerDied","Data":"ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca"} Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471030 4820 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471084 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.482262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48"} Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.482302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"55817b22512b4f79b05a91fa0314cc7452c7e5542175c8a9531d82ddc3a3f526"} Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.775495 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:05:53 crc kubenswrapper[4820]: E0221 07:05:53.775931 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.775949 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.776184 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.779272 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.801403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870843 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.873737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.967463 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.969296 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974845 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 
21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975038 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975718 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976540 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976637 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfdgf" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976880 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.978071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.978360 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.990333 
4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.004993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod 
\"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.119892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.183978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.185600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.199700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.200636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: 
\"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.208435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.310330 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.516115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e"} Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.524566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4"} Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.757479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.005172 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.084026 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.536039 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"3e28ba467d144d224a1ff3d02bb67eaf401e7d86630f2424dc064e42e81ffa60"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.541551 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca0fc508-843f-44b4-96a4-83072d14662c" containerID="d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110" exitCode=0 Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.542410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.542445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" 
event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerStarted","Data":"605148f1eaf42ee14c34a0ed19827ae005b65705aabb872c28cb0ecdd7dd5d16"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.589143 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.589125179 podStartE2EDuration="4.589125179s" podCreationTimestamp="2026-02-21 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:55.557695001 +0000 UTC m=+1130.590779209" watchObservedRunningTime="2026-02-21 07:05:55.589125179 +0000 UTC m=+1130.622209387" Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.645225 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.64520382 podStartE2EDuration="4.64520382s" podCreationTimestamp="2026-02-21 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:55.637247413 +0000 UTC m=+1130.670331641" watchObservedRunningTime="2026-02-21 07:05:55.64520382 +0000 UTC m=+1130.678288018" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.555055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerStarted","Data":"6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.555447 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.558659 4820 generic.go:334] "Generic (PLEG): container finished" podID="a9866838-084f-4340-b72d-5dba3461661e" 
containerID="550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3" exitCode=0 Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.558712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerDied","Data":"550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.561572 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerID="2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445" exitCode=0 Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.562546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerDied","Data":"2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.563146 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.582201 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.584147 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.587698 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.588032 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.602372 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" podStartSLOduration=3.6023534980000003 podStartE2EDuration="3.602353498s" podCreationTimestamp="2026-02-21 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:56.58303497 +0000 UTC m=+1131.616119168" watchObservedRunningTime="2026-02-21 07:05:56.602353498 +0000 UTC m=+1131.635437696" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.622418 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.632508 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7777947948-b8bjv" podStartSLOduration=3.6324865600000003 podStartE2EDuration="3.63248656s" podCreationTimestamp="2026-02-21 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:56.621865941 +0000 UTC m=+1131.654950159" watchObservedRunningTime="2026-02-21 07:05:56.63248656 +0000 UTC m=+1131.665570758" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.752662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: 
\"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.752821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.855810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.855876 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856040 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") 
pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.857314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.862298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.862897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " 
pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.863755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.872465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.873144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.874006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.882047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.902746 4820 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.415369 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.543035 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.588818 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.589105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.589279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.598077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600365 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb" (OuterVolumeSpecName: "kube-api-access-svdtb") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "kube-api-access-svdtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600543 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerDied","Data":"17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f"} Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600870 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerDied","Data":"6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2"} Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611773 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611863 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.623626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691247 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691446 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") "
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") "
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691978 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691998 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.692008 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.696404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.699150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl" (OuterVolumeSpecName: "kube-api-access-x55sl") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "kube-api-access-x55sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.702927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts" (OuterVolumeSpecName: "scripts") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.703066 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.744363 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.749193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data" (OuterVolumeSpecName: "config-data") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.768661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"]
Feb 21 07:05:58 crc kubenswrapper[4820]: E0221 07:05:58.769080 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.769101 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync"
Feb 21 07:05:58 crc kubenswrapper[4820]: E0221 07:05:58.769137 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.769145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.774531 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.774596 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.775402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.777788 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.781125 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793743 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793778 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793788 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793798 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793808 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793817 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.796020 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"]
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895825 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895958 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895994 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896171 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.977740 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"]
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.997956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998290 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.003016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.004677 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.006143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.007716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.007789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.010303 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.017751 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.039081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.068358 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.069726 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.076968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zjnng"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.077167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.077688 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.090834 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.091022 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.133981 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.135428 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.145664 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.185315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208309 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.217474 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.218885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.289724 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333938 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.344407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.344856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.346691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.349208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.352331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.352714 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" containerID="cri-o://6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" gracePeriod=10
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.367070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.370261 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.372801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.406873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.408768 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.420643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.420739 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436108 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436321 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436405 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436483 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21
07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " 
pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.438532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.459474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.463263 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.463462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.466152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" 
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.472479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474508 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.482058 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.490373 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.514471 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.516358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.526254 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.538295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc 
kubenswrapper[4820]: I0221 07:05:59.539418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 
07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539587 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.545833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547031 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 
07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.549363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.548593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.548781 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.551740 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.554827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.563465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.584401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.641360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 
07:05:59.641726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.641918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.645530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.645781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.655664 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca0fc508-843f-44b4-96a4-83072d14662c" containerID="6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" exitCode=0 Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.655953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861"} Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.657320 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.674984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"} Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.675041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"73fe748c020d9cdb0f7411013cf334c00e8fbd8633affe05f3bd15d54091bf15"} Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.680608 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.695320 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.709891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5"} Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.748005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.748136 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.750107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.754873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.758479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.766354 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.769463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.773934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.014144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.096831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.126310 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.198467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260360 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.273201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g" (OuterVolumeSpecName: "kube-api-access-8mc4g") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "kube-api-access-8mc4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.367459 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.407093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.418736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.419034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.419652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.423921 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config" (OuterVolumeSpecName: "config") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477474 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477504 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477516 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477527 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477535 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.706911 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.746366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"232aac902ab163c61332ca9251f3b8bd22a0d25dd116a7153f1bb796d475d539"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.749778 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.755996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"fb422d822e894d47a3283c85ceaf5b546e6dcaf88367608fcf5a454edd87769f"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.771461 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773458 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"605148f1eaf42ee14c34a0ed19827ae005b65705aabb872c28cb0ecdd7dd5d16"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773982 4820 scope.go:117] "RemoveContainer" containerID="6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.775505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerStarted","Data":"200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.775553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerStarted","Data":"64f0896a03976792d3631a63a19b92a0be5d44121ab07ab2ac5e458129f71510"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.776223 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.780566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"e37ea0169f5b1d136331cf197b065bd27c292073bd4e6a5a36c7265891cbd6b0"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.786926 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"} 
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.787215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.821777 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-665c5b9dff-g2t96" podStartSLOduration=2.821757826 podStartE2EDuration="2.821757826s" podCreationTimestamp="2026-02-21 07:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:00.805787091 +0000 UTC m=+1135.838871299" watchObservedRunningTime="2026-02-21 07:06:00.821757826 +0000 UTC m=+1135.854842024" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.842006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85dd5db455-fl7mt" podStartSLOduration=4.841986639 podStartE2EDuration="4.841986639s" podCreationTimestamp="2026-02-21 07:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:00.832277454 +0000 UTC m=+1135.865361652" watchObservedRunningTime="2026-02-21 07:06:00.841986639 +0000 UTC m=+1135.875070837" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.860406 4820 scope.go:117] "RemoveContainer" containerID="d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.870789 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.881493 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.967559 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:01 crc 
kubenswrapper[4820]: I0221 07:06:01.739359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" path="/var/lib/kubelet/pods/ca0fc508-843f-44b4-96a4-83072d14662c/volumes" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842515 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"5bed7530faf105f1f1bc8124a0e0b6da645917e74dd6cbd033eab92c51acc5f7"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.843633 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.843656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.845633 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:06:01 crc kubenswrapper[4820]: E0221 07:06:01.846080 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="init" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846093 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="init" Feb 21 07:06:01 crc kubenswrapper[4820]: E0221 07:06:01.846116 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846123 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846347 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.847538 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849378 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerID="6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38" exitCode=0 Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerStarted","Data":"f12c1a8e0db096347f19d2697b9e9331aac42f90a3217a3038a39188f188a441"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.855221 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.857910 4820 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.868188 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"9e23535ae9303b01da633c9a5de5b1cca080fe7244d856307bd78e440fdb1a72"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.910892 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.912111 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8678d9479b-vqsct" podStartSLOduration=2.91209375 podStartE2EDuration="2.91209375s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:01.878200965 +0000 UTC m=+1136.911285163" watchObservedRunningTime="2026-02-21 07:06:01.91209375 +0000 UTC m=+1136.945177938" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.922841 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.922886 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931862 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc 
kubenswrapper[4820]: I0221 07:06:01.932345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.952336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.960674 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.960731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.981539 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.016013 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.022500 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035161 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.038087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.047994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.048543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.050164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.051027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.051881 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.065677 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.325997 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.887977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888064 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888330 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:06:03 crc kubenswrapper[4820]: I0221 07:06:03.907202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerStarted","Data":"d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.216007 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" podStartSLOduration=5.215982542 podStartE2EDuration="5.215982542s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:03.939248096 +0000 UTC m=+1138.972332304" watchObservedRunningTime="2026-02-21 07:06:04.215982542 +0000 UTC m=+1139.249066750" Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.219827 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:06:04 crc kubenswrapper[4820]: W0221 07:06:04.228930 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4709782f_54e7_4a78_a56e_8f58a5556501.slice/crio-c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6 WatchSource:0}: Error finding container c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6: Status 404 returned error can't find the container with id c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6 Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.683526 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.925755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.930050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.930430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.932457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.933958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.935521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerStarted","Data":"e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb"} Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938944 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938955 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.939391 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.939406 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.773894 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wdvf7" podStartSLOduration=3.859869468 podStartE2EDuration="38.773874344s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.517953408 +0000 UTC m=+1103.551037596" lastFinishedPulling="2026-02-21 07:06:03.431958274 +0000 UTC m=+1138.465042472" observedRunningTime="2026-02-21 07:06:04.951519998 +0000 UTC m=+1139.984604196" watchObservedRunningTime="2026-02-21 
07:06:05.773874344 +0000 UTC m=+1140.806958542" Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.861723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.877893 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792"} Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998533 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998555 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.031500 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c"} Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.037902 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96"} Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.045951 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76b79c9766-s694g" podStartSLOduration=5.045931542 podStartE2EDuration="5.045931542s" 
podCreationTimestamp="2026-02-21 07:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:06.035889598 +0000 UTC m=+1141.068973796" watchObservedRunningTime="2026-02-21 07:06:06.045931542 +0000 UTC m=+1141.079015740" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.046815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e"} Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.071911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"} Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.078859 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.078949 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.086122 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-758b5755fc-2m84q" podStartSLOduration=3.790335042 podStartE2EDuration="7.086097129s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.126747288 +0000 UTC m=+1135.159831486" lastFinishedPulling="2026-02-21 07:06:03.422509375 +0000 UTC m=+1138.455593573" observedRunningTime="2026-02-21 07:06:06.058384842 +0000 UTC m=+1141.091469040" watchObservedRunningTime="2026-02-21 07:06:06.086097129 +0000 UTC m=+1141.119181327" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.096307 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podStartSLOduration=3.9100293 podStartE2EDuration="7.096287487s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.245708197 +0000 UTC m=+1135.278792395" lastFinishedPulling="2026-02-21 07:06:03.431966384 +0000 UTC m=+1138.465050582" observedRunningTime="2026-02-21 07:06:06.087653112 +0000 UTC m=+1141.120737310" watchObservedRunningTime="2026-02-21 07:06:06.096287487 +0000 UTC m=+1141.129371685" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.159248 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podStartSLOduration=4.807066576 podStartE2EDuration="7.159217966s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.882458185 +0000 UTC m=+1135.915542383" lastFinishedPulling="2026-02-21 07:06:03.234609575 +0000 UTC m=+1138.267693773" observedRunningTime="2026-02-21 07:06:06.118143275 +0000 UTC m=+1141.151227473" watchObservedRunningTime="2026-02-21 07:06:06.159217966 +0000 UTC m=+1141.192302154" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.173853 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.194143 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-867cbf55-jx754" podStartSLOduration=4.315637676 podStartE2EDuration="7.194123739s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.722204188 +0000 UTC m=+1135.755288386" lastFinishedPulling="2026-02-21 07:06:03.600690251 +0000 UTC m=+1138.633774449" observedRunningTime="2026-02-21 07:06:06.182549433 +0000 UTC m=+1141.215633651" watchObservedRunningTime="2026-02-21 07:06:06.194123739 +0000 UTC 
m=+1141.227207937" Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.223479 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.429838 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:07 crc kubenswrapper[4820]: I0221 07:06:07.839491 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.094567 4820 generic.go:334] "Generic (PLEG): container finished" podID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerID="e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229" exitCode=0 Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095033 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" containerID="cri-o://669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" gracePeriod=30 Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.094704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerDied","Data":"e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229"} Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095170 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" containerID="cri-o://629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" gracePeriod=30 Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095396 4820 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/barbican-worker-758b5755fc-2m84q" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" containerID="cri-o://a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" gracePeriod=30 Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095460 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-758b5755fc-2m84q" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" containerID="cri-o://88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" gracePeriod=30 Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115114 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerID="88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" exitCode=0 Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115174 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerID="a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" exitCode=143 Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c"} Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e"} Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117180 4820 generic.go:334] "Generic (PLEG): container finished" podID="4aea1771-69e2-4735-b813-9a8214a2227c" 
containerID="629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" exitCode=0 Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117269 4820 generic.go:334] "Generic (PLEG): container finished" podID="4aea1771-69e2-4735-b813-9a8214a2227c" containerID="669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" exitCode=143 Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117422 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e"} Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb"} Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.556639 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.683108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.749043 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.749333 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" containerID="cri-o://fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" gracePeriod=10 Feb 21 07:06:10 crc kubenswrapper[4820]: I0221 07:06:10.133629 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="375bfff4-76af-4f71-a665-c409feeb6f67" containerID="fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" exitCode=0 Feb 21 07:06:10 crc kubenswrapper[4820]: I0221 07:06:10.133678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173"} Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.111364 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.337421 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464573 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" 
(UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464914 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.465677 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs" (OuterVolumeSpecName: "logs") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.469004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d" (OuterVolumeSpecName: "kube-api-access-tcn9d") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "kube-api-access-tcn9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.471391 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts" (OuterVolumeSpecName: "scripts") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.502040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data" (OuterVolumeSpecName: "config-data") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.503511 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.534228 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567838 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567865 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567875 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567903 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567915 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: E0221 07:06:13.663060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669196 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669439 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.677634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg" (OuterVolumeSpecName: "kube-api-access-glrwg") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "kube-api-access-glrwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.677721 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs" (OuterVolumeSpecName: "logs") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.689654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.721736 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.733137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.736194 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.763689 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data" (OuterVolumeSpecName: "config-data") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773027 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773710 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776536 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776766 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776791 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776805 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.777059 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.781389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs" (OuterVolumeSpecName: "logs") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.783399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv" (OuterVolumeSpecName: "kube-api-access-jbrbv") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "kube-api-access-jbrbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.783497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.799785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.826411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data" (OuterVolumeSpecName: "config-data") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.843811 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.878864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879051 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " 
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879807 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879829 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879841 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879850 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879860 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.883136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs" (OuterVolumeSpecName: "kube-api-access-mbqjs") pod 
"375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "kube-api-access-mbqjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.928456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.928966 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.933624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config" (OuterVolumeSpecName: "config") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.936960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.948909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982375 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982418 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982431 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982444 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982456 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982467 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.986215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045257 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045524 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" containerID="cri-o://9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045912 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" containerID="cri-o://eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"e37ea0169f5b1d136331cf197b065bd27c292073bd4e6a5a36c7265891cbd6b0"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207361 4820 scope.go:117] "RemoveContainer" containerID="629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207489 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.215517 4820 generic.go:334] "Generic (PLEG): container finished" podID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerID="9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" exitCode=143 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.215589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerDied","Data":"cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218523 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218763 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.233803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"0deee66ca0c914e04051643e2ef7f61bf67d60020463554eb611d4a4dbdb4fc8"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.233899 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.244644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"fb422d822e894d47a3283c85ceaf5b546e6dcaf88367608fcf5a454edd87769f"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.245193 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.257885 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" containerID="cri-o://9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258759 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" containerID="cri-o://cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258865 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" 
containerID="cri-o://a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.325510 4820 scope.go:117] "RemoveContainer" containerID="669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.350596 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.353595 4820 scope.go:117] "RemoveContainer" containerID="fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.354097 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.374044 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.384369 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.418052 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.430924 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.432599 4820 scope.go:117] "RemoveContainer" containerID="18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.474563 4820 scope.go:117] "RemoveContainer" containerID="88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507509 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] 
Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507852 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507868 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507882 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507888 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507896 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="init" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507903 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="init" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507915 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507921 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507937 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" 
Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507961 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507971 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508133 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508301 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508332 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508350 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508358 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508370 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.509299 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.511073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.511404 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.512489 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.512907 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.514822 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p47r7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.527280 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.577124 4820 scope.go:117] "RemoveContainer" containerID="a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595422 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595856 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697840 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697927 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698009 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703880 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.725753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.861868 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.265897 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerStarted","Data":"902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270592 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" exitCode=0 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270617 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" exitCode=2 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270669 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.284424 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vfn4b" podStartSLOduration=3.446484431 podStartE2EDuration="48.284407318s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.517916497 +0000 UTC m=+1103.551000695" lastFinishedPulling="2026-02-21 07:06:13.355839384 +0000 UTC m=+1148.388923582" observedRunningTime="2026-02-21 07:06:15.280740378 +0000 UTC 
m=+1150.313824576" watchObservedRunningTime="2026-02-21 07:06:15.284407318 +0000 UTC m=+1150.317491516" Feb 21 07:06:15 crc kubenswrapper[4820]: W0221 07:06:15.357997 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7e0258_f8e3_4e7c_8a4d_aec3ee4d2ffe.slice/crio-d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08 WatchSource:0}: Error finding container d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08: Status 404 returned error can't find the container with id d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.361302 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.731410 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" path="/var/lib/kubelet/pods/375bfff4-76af-4f71-a665-c409feeb6f67/volumes" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.733076 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" path="/var/lib/kubelet/pods/4aea1771-69e2-4735-b813-9a8214a2227c/volumes" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.733904 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" path="/var/lib/kubelet/pods/f42a19be-1d8d-45f5-a92e-95b3fc416db7/volumes" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280729 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.308109 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85cb846b98-bwgbn" podStartSLOduration=2.308085301 podStartE2EDuration="2.308085301s" podCreationTimestamp="2026-02-21 07:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:16.304113553 +0000 UTC m=+1151.337197751" watchObservedRunningTime="2026-02-21 07:06:16.308085301 +0000 UTC m=+1151.341169499" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.224679 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:41894->10.217.0.162:9311: read: connection reset by peer" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.224722 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8678d9479b-vqsct" 
podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:41896->10.217.0.162:9311: read: connection reset by peer" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.290660 4820 generic.go:334] "Generic (PLEG): container finished" podID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerID="eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" exitCode=0 Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.291683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d"} Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.657381 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756190 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod 
\"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.762788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs" (OuterVolumeSpecName: "logs") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.778114 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf" (OuterVolumeSpecName: "kube-api-access-6bncf") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "kube-api-access-6bncf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.809557 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.813422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858488 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858737 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858751 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858761 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.859168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data" (OuterVolumeSpecName: "config-data") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.960940 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.299292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"5bed7530faf105f1f1bc8124a0e0b6da645917e74dd6cbd033eab92c51acc5f7"} Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.300533 4820 scope.go:117] "RemoveContainer" containerID="eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.300746 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.307793 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" exitCode=0 Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.307836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805"} Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.344228 4820 scope.go:117] "RemoveContainer" containerID="9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.357754 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.370835 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.650588 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772529 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772772 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772826 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.774020 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.781503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w" (OuterVolumeSpecName: "kube-api-access-9vr9w") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "kube-api-access-9vr9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.781994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.796520 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts" (OuterVolumeSpecName: "scripts") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.842714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874943 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874982 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874994 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.875005 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: 
I0221 07:06:18.875020 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.883382 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data" (OuterVolumeSpecName: "config-data") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.901439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.976772 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.976805 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.319790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"0f8176927ad01d0eb54f7e8ca55f1bbe340ac767367622047b311589a963df40"} Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.320121 4820 scope.go:117] "RemoveContainer" containerID="cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.320264 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.339146 4820 scope.go:117] "RemoveContainer" containerID="a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.366959 4820 scope.go:117] "RemoveContainer" containerID="9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.383294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.396594 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.405774 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406273 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406288 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406293 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" 
containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406308 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406328 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406489 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406503 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406519 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406530 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406540 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.408465 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.410877 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.412106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.420357 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.485923 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " 
pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.589314 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: 
I0221 07:06:19.589564 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.593063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.593492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.594310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.597541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.609002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: 
\"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.719517 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" path="/var/lib/kubelet/pods/1d2a71d7-f0a3-47e2-9594-303d2240043a/volumes" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.720112 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" path="/var/lib/kubelet/pods/a3cce54d-5f2a-4e51-864d-03e55b50d698/volumes" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.728372 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:20 crc kubenswrapper[4820]: I0221 07:06:20.184459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:20 crc kubenswrapper[4820]: I0221 07:06:20.330026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"391e3a8821e7a5d4d540410a63bf4ea889c64567ec635528d2b32100b2356ede"} Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.377537 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15"} Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.382104 4820 generic.go:334] "Generic (PLEG): container finished" podID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerID="902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea" exitCode=0 Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.382147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" 
event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerDied","Data":"902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.391939 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.392260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.814879 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc 
kubenswrapper[4820]: I0221 07:06:22.954262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.954335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.954359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.955396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.960361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g" (OuterVolumeSpecName: "kube-api-access-d2l4g") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "kube-api-access-d2l4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.969306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts" (OuterVolumeSpecName: "scripts") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.969366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.014230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.021380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data" (OuterVolumeSpecName: "config-data") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057262 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057303 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057316 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057330 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057341 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057351 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.401390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f"} Feb 21 07:06:23 crc 
kubenswrapper[4820]: I0221 07:06:23.401543 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerDied","Data":"8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24"} Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403430 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403487 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.432212 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.552158224 podStartE2EDuration="4.432194219s" podCreationTimestamp="2026-02-21 07:06:19 +0000 UTC" firstStartedPulling="2026-02-21 07:06:20.187669351 +0000 UTC m=+1155.220753549" lastFinishedPulling="2026-02-21 07:06:23.067705346 +0000 UTC m=+1158.100789544" observedRunningTime="2026-02-21 07:06:23.425543627 +0000 UTC m=+1158.458627845" watchObservedRunningTime="2026-02-21 07:06:23.432194219 +0000 UTC m=+1158.465278417" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.705729 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: E0221 07:06:23.706338 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.706359 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" 
containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.706603 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.707675 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.710457 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.710902 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mmvl6" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.711178 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.711340 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.713286 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767590 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.775296 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.789140 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869516 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.870816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.873762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.874850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.875959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.879740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.886091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974767 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.975000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.977431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.977483 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.981361 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.999668 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.001850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.002203 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.005076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.029570 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: 
\"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.077036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.077110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.092124 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178580 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6w49\" (UniqueName: 
\"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.179226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.181312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.188827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.190076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.191752 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.192511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.208149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.324360 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.392450 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.566286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.676478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.701355 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.702395 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" containerID="cri-o://302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" gracePeriod=30 Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.703161 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" containerID="cri-o://a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" gracePeriod=30 Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.746682 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.750432 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.763647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.826651 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": read tcp 10.217.0.2:41850->10.217.0.155:9696: read: connection reset by peer" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900845 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900928 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod 
\"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.009589 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.012922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc 
kubenswrapper[4820]: I0221 07:06:25.012945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.028672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.036477 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.073510 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.424898 4820 generic.go:334] "Generic (PLEG): container finished" podID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" exitCode=0 Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.424977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"} Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.430219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"6e502663719ec0c0a0a84d0c96dd6393160aec2507fa225d1ef3ff9eecb2291e"} Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.432842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"ff64435a47c0297e2732c2e77200493a270636c1ffcd894d5737019411ddb58e"} Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434581 4820 generic.go:334] "Generic (PLEG): container finished" podID="68596d31-1da0-47aa-9330-179af16beee5" containerID="aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858" exitCode=0 Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858"} Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" 
event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerStarted","Data":"28ad0df7b26bbd0219980c2f8c1104679c4b4d8454ba1005ca678ce2d979fa35"} Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.714553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.135053 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.453345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerStarted","Data":"f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"} Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.454379 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903"} Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117"} Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639"} Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459919 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.461045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293"} Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.475888 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" podStartSLOduration=3.475871343 podStartE2EDuration="3.475871343s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:26.46847921 +0000 UTC m=+1161.501563408" watchObservedRunningTime="2026-02-21 07:06:26.475871343 +0000 UTC m=+1161.508955541" Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.499386 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7796b97765-sqvtc" podStartSLOduration=2.499366085 podStartE2EDuration="2.499366085s" podCreationTimestamp="2026-02-21 07:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:26.487804268 +0000 UTC m=+1161.520888476" watchObservedRunningTime="2026-02-21 07:06:26.499366085 +0000 UTC m=+1161.532450283" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.156941 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254782 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.272484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.273420 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm" (OuterVolumeSpecName: "kube-api-access-hwzcm") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "kube-api-access-hwzcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.359407 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.359445 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.371909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config" (OuterVolumeSpecName: "config") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.384014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.389952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.404139 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.452307 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461482 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461520 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461529 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461546 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") on node \"crc\" DevicePath \"\"" Feb 
21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461555 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474754 4820 generic.go:334] "Generic (PLEG): container finished" podID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" exitCode=0 Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474817 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"} Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"73fe748c020d9cdb0f7411013cf334c00e8fbd8633affe05f3bd15d54091bf15"} Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474995 4820 scope.go:117] "RemoveContainer" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74"} Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476792 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" containerID="cri-o://deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" gracePeriod=30 Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476865 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476896 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" containerID="cri-o://29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" gracePeriod=30 Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.487088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"} Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.487125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"} Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.503925 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.503905665 podStartE2EDuration="4.503905665s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:27.497020077 +0000 UTC m=+1162.530104275" watchObservedRunningTime="2026-02-21 07:06:27.503905665 +0000 UTC m=+1162.536989873" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.527311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=3.538099333 podStartE2EDuration="4.527292234s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="2026-02-21 07:06:24.608005707 +0000 UTC m=+1159.641089905" lastFinishedPulling="2026-02-21 07:06:25.597198608 +0000 UTC m=+1160.630282806" observedRunningTime="2026-02-21 07:06:27.520277782 +0000 UTC m=+1162.553362000" watchObservedRunningTime="2026-02-21 07:06:27.527292234 +0000 UTC m=+1162.560376432" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.557369 4820 scope.go:117] "RemoveContainer" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.578635 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.581038 4820 scope.go:117] "RemoveContainer" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" Feb 21 07:06:27 crc kubenswrapper[4820]: E0221 07:06:27.584912 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": container with ID starting with a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47 not found: ID does not exist" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.584965 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"} err="failed to get container status \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": rpc error: code = NotFound desc = could not find container \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": container with ID starting with 
a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47 not found: ID does not exist" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.584999 4820 scope.go:117] "RemoveContainer" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" Feb 21 07:06:27 crc kubenswrapper[4820]: E0221 07:06:27.585333 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": container with ID starting with 302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0 not found: ID does not exist" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.585358 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"} err="failed to get container status \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": rpc error: code = NotFound desc = could not find container \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": container with ID starting with 302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0 not found: ID does not exist" Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.590371 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.711040 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" path="/var/lib/kubelet/pods/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47/volumes" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504068 4820 generic.go:334] "Generic (PLEG): container finished" podID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerID="29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" exitCode=0 
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504445 4820 generic.go:334] "Generic (PLEG): container finished" podID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerID="deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" exitCode=143 Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74"} Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293"} Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.597573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.632942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod 
\"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633140 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633638 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.634318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs" (OuterVolumeSpecName: "logs") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts" (OuterVolumeSpecName: "scripts") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49" (OuterVolumeSpecName: "kube-api-access-z6w49") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "kube-api-access-z6w49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.668880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.708389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data" (OuterVolumeSpecName: "config-data") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735888 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735930 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735941 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735952 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735963 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735975 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.031292 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515758 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"6e502663719ec0c0a0a84d0c96dd6393160aec2507fa225d1ef3ff9eecb2291e"} Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515785 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515818 4820 scope.go:117] "RemoveContainer" containerID="29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.547964 4820 scope.go:117] "RemoveContainer" containerID="deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.557620 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.576212 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.592405 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.592886 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.592970 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593072 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593190 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593324 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593630 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593700 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593759 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593826 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.594871 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.604779 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607002 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607920 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656554 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" 
(UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.706905 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" path="/var/lib/kubelet/pods/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea/volumes" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758017 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758362 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758636 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.765206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.767133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.767806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.775229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.779900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 
07:06:29.782853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.811174 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.933748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:30 crc kubenswrapper[4820]: W0221 07:06:30.420012 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899bd84b_c67f_4a89_9f92_a68094530566.slice/crio-5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886 WatchSource:0}: Error finding container 5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886: Status 404 returned error can't find the container with id 5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886 Feb 21 07:06:30 crc kubenswrapper[4820]: I0221 07:06:30.426382 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:30 crc kubenswrapper[4820]: I0221 07:06:30.527787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886"} Feb 21 07:06:31 crc kubenswrapper[4820]: I0221 07:06:31.544620 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067"} Feb 21 07:06:31 crc kubenswrapper[4820]: I0221 07:06:31.640499 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.555657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c"} Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.557327 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.586638 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.586618158 podStartE2EDuration="3.586618158s" podCreationTimestamp="2026-02-21 07:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:32.572334608 +0000 UTC m=+1167.605418846" watchObservedRunningTime="2026-02-21 07:06:32.586618158 +0000 UTC m=+1167.619702356" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.069083 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.070697 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082174 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082397 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ntd4f" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.091356 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.111484 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.171549 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.171769 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns" containerID="cri-o://d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5" gracePeriod=10 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251658 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.252867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.259704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.259972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.271153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " 
pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.329118 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.376510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.417882 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583385 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerID="d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5" exitCode=0 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5"} Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583566 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler" containerID="cri-o://1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" gracePeriod=30 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.584338 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe" containerID="cri-o://1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" gracePeriod=30 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.661794 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771964 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.772017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.772164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.777587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst" (OuterVolumeSpecName: "kube-api-access-chvst") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "kube-api-access-chvst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.831679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.837148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.837968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.862060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.867915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config" (OuterVolumeSpecName: "config") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875097 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875150 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875165 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875185 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875203 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875216 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:34 crc kubenswrapper[4820]: W0221 07:06:34.982422 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7d6374d_1595_4586_b161_d199a2b39068.slice/crio-34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb WatchSource:0}: Error finding container 34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb: Status 404 returned error can't find the container with id 34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.986612 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.596870 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.596869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"f12c1a8e0db096347f19d2697b9e9331aac42f90a3217a3038a39188f188a441"} Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.598087 4820 scope.go:117] "RemoveContainer" containerID="d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5" Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.598509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7d6374d-1595-4586-b161-d199a2b39068","Type":"ContainerStarted","Data":"34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb"} Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.606790 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d7b1660-2001-4122-9369-97c629938e58" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" exitCode=0 Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.606833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"} Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.621846 4820 scope.go:117] "RemoveContainer" containerID="6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38" Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.638162 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.648556 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.716396 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" path="/var/lib/kubelet/pods/29aae534-5c23-4125-a6c1-57b4bd7a2a4c/volumes" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.405736 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"] Feb 21 07:06:37 crc kubenswrapper[4820]: E0221 07:06:37.409617 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409650 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns" Feb 21 07:06:37 crc kubenswrapper[4820]: E0221 07:06:37.409674 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="init" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="init" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409954 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.411081 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.414375 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.414539 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.416922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.425746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"] Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521213 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521698 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc 
kubenswrapper[4820]: I0221 07:06:37.623278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623550 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.624046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.624211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.630439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.631941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.632918 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.637631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.639852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.641122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod 
\"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.774146 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.338731 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"] Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.353851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499181 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499267 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod 
\"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499549 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499958 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.504829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.505464 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4" (OuterVolumeSpecName: "kube-api-access-8f7p4") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "kube-api-access-8f7p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.511136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts" (OuterVolumeSpecName: "scripts") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.531671 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.531931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" containerID="cri-o://792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15" gracePeriod=30 Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" containerID="cri-o://0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" gracePeriod=30 Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532113 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" 
containerName="sg-core" containerID="cri-o://e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" gracePeriod=30 Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532147 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" containerID="cri-o://1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb" gracePeriod=30 Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.540349 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602412 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602440 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602449 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.689138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"799aa64333911f7111f98ffff76ee1c66aebdf83eeaa6dc6c45e5389c74e915a"} Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690702 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d7b1660-2001-4122-9369-97c629938e58" 
containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" exitCode=0 Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"} Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"ff64435a47c0297e2732c2e77200493a270636c1ffcd894d5737019411ddb58e"} Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690772 4820 scope.go:117] "RemoveContainer" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690884 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.708356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.745357 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data" (OuterVolumeSpecName: "config-data") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.805930 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.805959 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.829517 4820 scope.go:117] "RemoveContainer" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.862882 4820 scope.go:117] "RemoveContainer" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" Feb 21 07:06:38 crc kubenswrapper[4820]: E0221 07:06:38.863511 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": container with ID starting with 1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77 not found: ID does not exist" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.863620 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"} err="failed to get container status \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": rpc error: code = NotFound desc = could not find container \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": container with ID starting with 1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77 not found: ID does not 
exist" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.863656 4820 scope.go:117] "RemoveContainer" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" Feb 21 07:06:38 crc kubenswrapper[4820]: E0221 07:06:38.864251 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": container with ID starting with 1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef not found: ID does not exist" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.864293 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"} err="failed to get container status \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": rpc error: code = NotFound desc = could not find container \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": container with ID starting with 1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef not found: ID does not exist" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.031461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.036900 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058161 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: E0221 07:06:39.058591 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058608 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe" Feb 21 07:06:39 crc kubenswrapper[4820]: E0221 07:06:39.058620 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058630 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058815 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.059753 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.061873 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.083674 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.110952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111216 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111371 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213267 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 
07:06:39.213515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.218411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.218417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.220843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.225825 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.233418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbbn\" (UniqueName: 
\"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.392888 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.711139 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7b1660-2001-4122-9369-97c629938e58" path="/var/lib/kubelet/pods/8d7b1660-2001-4122-9369-97c629938e58/volumes" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720514 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720608 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737834 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737879 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" exitCode=2 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737887 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737894 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.756616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-proxy-cffb45b79-w6bp8" podStartSLOduration=2.756590799 podStartE2EDuration="2.756590799s" podCreationTimestamp="2026-02-21 07:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:39.739592954 +0000 UTC m=+1174.772677162" watchObservedRunningTime="2026-02-21 07:06:39.756590799 +0000 UTC m=+1174.789674997" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.853548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: W0221 07:06:39.863960 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode533e163_2ccc_4468_9083_c9bf711b0dfb.slice/crio-26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a WatchSource:0}: Error finding container 26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a: Status 404 returned error can't find the container with id 26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.961606 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032157 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032368 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.033549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.034711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.038214 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts" (OuterVolumeSpecName: "scripts") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.068541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7" (OuterVolumeSpecName: "kube-api-access-wfpp7") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "kube-api-access-wfpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.117393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138107 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138147 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138159 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138169 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138180 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.182570 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.212416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data" (OuterVolumeSpecName: "config-data") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.245618 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.245661 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.747636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a"} Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"391e3a8821e7a5d4d540410a63bf4ea889c64567ec635528d2b32100b2356ede"} Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761607 4820 scope.go:117] "RemoveContainer" containerID="0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761549 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.782376 4820 scope.go:117] "RemoveContainer" containerID="e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.810968 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.820314 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.831524 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832110 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832143 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832157 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832164 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832185 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832192 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832213 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832219 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832397 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832410 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832423 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832434 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.834139 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.837837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.838790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.838790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.959739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.959953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: 
\"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960451 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.062857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063172 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.064668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " 
pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.087440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.087945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.088506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.088748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.090398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.095180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsc2l\" (UniqueName: 
\"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.155357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.710442 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" path="/var/lib/kubelet/pods/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c/volumes" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.773741 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"} Feb 21 07:06:42 crc kubenswrapper[4820]: I0221 07:06:42.575375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.395880 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.404038 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.533941 4820 scope.go:117] "RemoveContainer" containerID="1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb" Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.612903 4820 scope.go:117] "RemoveContainer" containerID="792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15" Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.130819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 
07:06:47.783671 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.785691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.852026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7d6374d-1595-4586-b161-d199a2b39068","Type":"ContainerStarted","Data":"909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342"} Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.855358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"} Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.857481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"c6e4c61f560fdc36ef8818a932ad9b4e68979f45ec64327ab6006d30f510ba75"} Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.892350 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.220524709 podStartE2EDuration="13.892334051s" podCreationTimestamp="2026-02-21 07:06:34 +0000 UTC" firstStartedPulling="2026-02-21 07:06:34.985518815 +0000 UTC m=+1170.018603013" lastFinishedPulling="2026-02-21 07:06:46.657328157 +0000 UTC m=+1181.690412355" observedRunningTime="2026-02-21 07:06:47.880468297 +0000 UTC m=+1182.913552495" watchObservedRunningTime="2026-02-21 07:06:47.892334051 +0000 UTC m=+1182.925418249" Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.915015 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=8.91499735 podStartE2EDuration="8.91499735s" podCreationTimestamp="2026-02-21 07:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:47.909719995 +0000 UTC m=+1182.942804193" watchObservedRunningTime="2026-02-21 07:06:47.91499735 +0000 UTC m=+1182.948081548" Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.859478 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.868555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"} Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.868597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"} Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.393279 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.538882 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.539167 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" containerID="cri-o://3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" gracePeriod=30 Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.539283 4820 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" containerID="cri-o://94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" gracePeriod=30 Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.672949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.879070 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerID="3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" exitCode=143 Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.880484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48"} Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481222 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481470 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" containerID="cri-o://f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" gracePeriod=30 Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481543 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" containerID="cri-o://f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" gracePeriod=30 Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.888331 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerID="f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" exitCode=143 Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.888446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4"} Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.891589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"} Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.672402 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:45738->10.217.0.151:9292: read: connection reset by peer" Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.672931 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:45722->10.217.0.151:9292: read: connection reset by peer" Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.916792 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerID="94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" exitCode=0 Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.916861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e"} Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.919811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"} Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.919975 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent" containerID="cri-o://0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" gracePeriod=30 Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd" containerID="cri-o://4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" gracePeriod=30 Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920131 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core" containerID="cri-o://d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" gracePeriod=30 Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920175 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent" containerID="cri-o://3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" gracePeriod=30 Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920289 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.942555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.298944974 podStartE2EDuration="12.942534217s" podCreationTimestamp="2026-02-21 07:06:40 +0000 UTC" firstStartedPulling="2026-02-21 07:06:47.141442016 +0000 UTC m=+1182.174526214" lastFinishedPulling="2026-02-21 07:06:51.785031259 +0000 UTC m=+1186.818115457" observedRunningTime="2026-02-21 07:06:52.940307166 +0000 UTC m=+1187.973391364" watchObservedRunningTime="2026-02-21 07:06:52.942534217 +0000 UTC m=+1187.975618415" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.273126 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.274434 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.288134 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.348267 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.353224 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.377131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.449260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.449366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.453265 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.454604 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.466148 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.474325 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.492034 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.493964 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.508917 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.546440 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550584 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.551810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.583179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.602410 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.652948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653181 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: 
\"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653433 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: 
\"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.654078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.654478 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs" (OuterVolumeSpecName: "logs") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.655273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.655696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.660019 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.660978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts" (OuterVolumeSpecName: "scripts") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.664294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z" (OuterVolumeSpecName: "kube-api-access-6kv8z") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "kube-api-access-6kv8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.668676 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:06:53 crc kubenswrapper[4820]: E0221 07:06:53.682342 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682371 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" Feb 21 07:06:53 crc kubenswrapper[4820]: E0221 07:06:53.682388 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682555 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682747 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682755 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.683203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.683291 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.684984 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.688802 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:60308->10.217.0.152:9292: read: connection reset by peer" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.688814 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:60316->10.217.0.152:9292: read: connection reset by peer" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.692111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.703062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.757835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759741 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 
07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759752 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759760 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759769 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759780 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759798 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.760148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.761098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" 
(UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.765388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data" (OuterVolumeSpecName: "config-data") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.769388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.781620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.782629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.801917 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node 
"crc" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.827829 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.856142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863298 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863418 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863436 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863446 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.916777 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.931767 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.938726 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.963123 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.964812 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.964883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.967713 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerID="f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" exitCode=0 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 
07:06:53.967851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.969755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.975115 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979752 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" exitCode=0 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979832 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" exitCode=2 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979844 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" exitCode=0 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979941 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.983607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"55817b22512b4f79b05a91fa0314cc7452c7e5542175c8a9531d82ddc3a3f526"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984754 4820 scope.go:117] "RemoveContainer" containerID="94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984782 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.004804 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.027749 4820 scope.go:117] "RemoveContainer" containerID="3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.048476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.063700 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.068354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.068493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.080510 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.082000 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.085556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.086309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.086823 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.117676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.169961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.170071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.171075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " 
pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.220099 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271503 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271528 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271694 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.277002 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.371756 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375564 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.377253 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.377772 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.378386 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.387308 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.388260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.394205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.415441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: 
\"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.430683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.434860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.478703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.478997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.481126 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.481344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs" (OuterVolumeSpecName: "logs") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.485416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts" (OuterVolumeSpecName: "scripts") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.487398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h" (OuterVolumeSpecName: "kube-api-access-zmd5h") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "kube-api-access-zmd5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.489455 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.529975 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.547881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data" (OuterVolumeSpecName: "config-data") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.554077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.582848 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583248 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583408 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583622 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583701 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583872 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583987 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.584084 4820 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.614326 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.641076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.668971 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.692458 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.750967 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.793711 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.894414 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018628 4820 scope.go:117] "RemoveContainer" containerID="f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" Feb 21 
07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018754 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.026878 4820 generic.go:334] "Generic (PLEG): container finished" podID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerID="54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec" exitCode=0 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.026999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerDied","Data":"54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.027032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerStarted","Data":"8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.032334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerStarted","Data":"7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.036780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerStarted","Data":"c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.046680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" 
event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerStarted","Data":"3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.052268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerStarted","Data":"be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.054630 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.100259 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.175591 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.175995 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7777947948-b8bjv" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" containerID="cri-o://336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" gracePeriod=30 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.176069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7777947948-b8bjv" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" containerID="cri-o://47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" gracePeriod=30 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.225548 4820 scope.go:117] "RemoveContainer" containerID="f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.240903 4820 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.282294 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.307285 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: E0221 07:06:55.307992 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: E0221 07:06:55.308015 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308022 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308356 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308382 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.310667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.313861 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.314078 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.324807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: W0221 07:06:55.360889 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef3827c2_ee55_4f86_a752_d7cbc9c6454e.slice/crio-7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9 WatchSource:0}: Error finding container 7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9: Status 404 returned error can't find the container with id 7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.375332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673531 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.674172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.687968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.692961 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.700437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.721112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.723220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 
07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.725744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.726590 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.727012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.730558 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" path="/var/lib/kubelet/pods/1f5a553e-c548-455a-83e2-87f8f71f3067/volumes" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.731532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" path="/var/lib/kubelet/pods/5c400cc2-a2a1-4204-8300-2b2420ab825e/volumes" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.757667 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774622 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774720 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774814 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774899 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.778166 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.778563 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.780639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l" (OuterVolumeSpecName: "kube-api-access-wsc2l") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "kube-api-access-wsc2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.797224 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts" (OuterVolumeSpecName: "scripts") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.869725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880708 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880750 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880763 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880774 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880785 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.936695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.958636 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data" (OuterVolumeSpecName: "config-data") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.974184 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.983443 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.983474 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.086715 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087185 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"c6e4c61f560fdc36ef8818a932ad9b4e68979f45ec64327ab6006d30f510ba75"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087516 4820 scope.go:117] "RemoveContainer" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.100889 4820 generic.go:334] "Generic (PLEG): container finished" podID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerID="826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.100982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerDied","Data":"826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.108657 4820 generic.go:334] "Generic (PLEG): container finished" podID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerID="8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.108852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerDied","Data":"8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad"} Feb 21 07:06:56 crc 
kubenswrapper[4820]: I0221 07:06:56.114804 4820 generic.go:334] "Generic (PLEG): container finished" podID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerID="bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.114898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerDied","Data":"bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.114927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerStarted","Data":"d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.148347 4820 scope.go:117] "RemoveContainer" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.154180 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerID="47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.154230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.155421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 
07:06:56.156565 4820 generic.go:334] "Generic (PLEG): container finished" podID="324a15c6-a903-420b-8db4-4268008c83d1" containerID="0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.156655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerDied","Data":"0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.158443 4820 generic.go:334] "Generic (PLEG): container finished" podID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerID="fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.158692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerDied","Data":"fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.184021 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.201533 4820 scope.go:117] "RemoveContainer" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.202801 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217454 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217894 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217906 
4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217915 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217922 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217938 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217944 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217965 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217970 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218136 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218149 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218157 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent" Feb 21 07:06:56 crc kubenswrapper[4820]: 
I0221 07:06:56.218167 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.219689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.221477 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.223476 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.278666 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.287382 4820 scope.go:117] "RemoveContainer" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.328387 4820 scope.go:117] "RemoveContainer" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.329865 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": container with ID starting with 4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf not found: ID does not exist" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.329897 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"} err="failed to get container status \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": rpc error: code = NotFound desc = 
could not find container \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": container with ID starting with 4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf not found: ID does not exist" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.329921 4820 scope.go:117] "RemoveContainer" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.332114 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": container with ID starting with d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418 not found: ID does not exist" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.332158 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"} err="failed to get container status \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": rpc error: code = NotFound desc = could not find container \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": container with ID starting with d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418 not found: ID does not exist" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.332182 4820 scope.go:117] "RemoveContainer" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.344323 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": container with ID starting with 3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0 not 
found: ID does not exist" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.344547 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"} err="failed to get container status \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": rpc error: code = NotFound desc = could not find container \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": container with ID starting with 3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0 not found: ID does not exist" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.344574 4820 scope.go:117] "RemoveContainer" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.355931 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": container with ID starting with 0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b not found: ID does not exist" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.355966 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"} err="failed to get container status \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": rpc error: code = NotFound desc = could not find container \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": container with ID starting with 0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b not found: ID does not exist" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390495 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390554 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390617 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496185 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496213 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc 
kubenswrapper[4820]: I0221 07:06:56.496402 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.497138 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.497169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.500349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.500562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.501837 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: 
\"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508521 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.526726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.572038 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.620165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:56 crc kubenswrapper[4820]: W0221 07:06:56.640598 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a9bb0a5_0caa_4137_b448_a2b55d9be1ff.slice/crio-b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4 WatchSource:0}: Error finding container b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4: Status 404 returned error can't find the container with id b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.644159 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.808930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"1fa19e90-7854-4eb9-9b72-26c8d0739851\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.810374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"1fa19e90-7854-4eb9-9b72-26c8d0739851\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.814046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fa19e90-7854-4eb9-9b72-26c8d0739851" (UID: 
"1fa19e90-7854-4eb9-9b72-26c8d0739851"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.815545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh" (OuterVolumeSpecName: "kube-api-access-t2snh") pod "1fa19e90-7854-4eb9-9b72-26c8d0739851" (UID: "1fa19e90-7854-4eb9-9b72-26c8d0739851"). InnerVolumeSpecName "kube-api-access-t2snh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.903639 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: i/o timeout" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.912871 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.912911 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.160748 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:57 crc kubenswrapper[4820]: W0221 07:06:57.163123 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26b2ff3_30ed_493c_a041_e23ebe440501.slice/crio-602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff WatchSource:0}: Error finding container 
602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff: Status 404 returned error can't find the container with id 602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.185704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.186389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerDied","Data":"8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24"} Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.186431 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.206400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"} Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.206460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96"} Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.208991 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4"} Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.246420 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.246389001 podStartE2EDuration="3.246389001s" podCreationTimestamp="2026-02-21 07:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:57.233491439 +0000 UTC m=+1192.266575647" watchObservedRunningTime="2026-02-21 07:06:57.246389001 +0000 UTC m=+1192.279473219" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.630217 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.710183 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" path="/var/lib/kubelet/pods/aa5aec23-74ee-4fc2-9fac-6039b558ec3d/volumes" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.728391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"324a15c6-a903-420b-8db4-4268008c83d1\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.728557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"324a15c6-a903-420b-8db4-4268008c83d1\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.731011 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "324a15c6-a903-420b-8db4-4268008c83d1" (UID: "324a15c6-a903-420b-8db4-4268008c83d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.734554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v" (OuterVolumeSpecName: "kube-api-access-9pd8v") pod "324a15c6-a903-420b-8db4-4268008c83d1" (UID: "324a15c6-a903-420b-8db4-4268008c83d1"). InnerVolumeSpecName "kube-api-access-9pd8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.832393 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.832422 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.850846 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.864182 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.900540 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.933058 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.933369 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.934093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" (UID: "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.934119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.937143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h" (OuterVolumeSpecName: "kube-api-access-mbz8h") pod "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" (UID: "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5"). InnerVolumeSpecName "kube-api-access-mbz8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.034776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"e610e477-7d95-4af5-be48-f8a9acd81d6a\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035530 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"e610e477-7d95-4af5-be48-f8a9acd81d6a\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035570 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035642 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036074 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036089 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e27134bb-c9b2-42d4-bad5-81e7b05874e7" (UID: "e27134bb-c9b2-42d4-bad5-81e7b05874e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e610e477-7d95-4af5-be48-f8a9acd81d6a" (UID: "e610e477-7d95-4af5-be48-f8a9acd81d6a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbe51cee-e461-4a5f-86d9-0eb600da3a82" (UID: "bbe51cee-e461-4a5f-86d9-0eb600da3a82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.039611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg" (OuterVolumeSpecName: "kube-api-access-4pncg") pod "e610e477-7d95-4af5-be48-f8a9acd81d6a" (UID: "e610e477-7d95-4af5-be48-f8a9acd81d6a"). InnerVolumeSpecName "kube-api-access-4pncg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.043652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s" (OuterVolumeSpecName: "kube-api-access-5nw9s") pod "bbe51cee-e461-4a5f-86d9-0eb600da3a82" (UID: "bbe51cee-e461-4a5f-86d9-0eb600da3a82"). InnerVolumeSpecName "kube-api-access-5nw9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.043697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6" (OuterVolumeSpecName: "kube-api-access-mwkj6") pod "e27134bb-c9b2-42d4-bad5-81e7b05874e7" (UID: "e27134bb-c9b2-42d4-bad5-81e7b05874e7"). InnerVolumeSpecName "kube-api-access-mwkj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138683 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138722 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138737 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138751 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138764 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138775 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.219820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.222430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.222471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerDied","Data":"7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223624 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223675 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.229349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerDied","Data":"c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.229393 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.230318 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.232823 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.233431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerDied","Data":"3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.233461 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237419 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerDied","Data":"be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237516 4820 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237569 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.241684 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.247508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerDied","Data":"d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.247621 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2" Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.118581 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.267282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.271464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.271518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.304694 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.304672676 podStartE2EDuration="4.304672676s" podCreationTimestamp="2026-02-21 07:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:59.291962319 +0000 UTC m=+1194.325046527" watchObservedRunningTime="2026-02-21 07:06:59.304672676 +0000 UTC m=+1194.337756874" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.284666 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerID="336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" exitCode=0 Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.284913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b"} Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.408140 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.494081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.494132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.499387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4" (OuterVolumeSpecName: "kube-api-access-zjdn4") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "kube-api-access-zjdn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.500426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.559626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config" (OuterVolumeSpecName: "config") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.568796 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.587782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595781 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595818 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595829 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595837 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595849 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.295616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"3e28ba467d144d224a1ff3d02bb67eaf401e7d86630f2424dc064e42e81ffa60"} Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.295922 4820 scope.go:117] "RemoveContainer" containerID="47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.296076 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.303644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5"} Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.303969 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" containerID="cri-o://7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304082 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" containerID="cri-o://a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304164 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" containerID="cri-o://67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304219 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" containerID="cri-o://57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304953 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.329087 4820 
scope.go:117] "RemoveContainer" containerID="336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.340704 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.089647719 podStartE2EDuration="5.340687295s" podCreationTimestamp="2026-02-21 07:06:56 +0000 UTC" firstStartedPulling="2026-02-21 07:06:57.172610207 +0000 UTC m=+1192.205694405" lastFinishedPulling="2026-02-21 07:07:00.423649783 +0000 UTC m=+1195.456733981" observedRunningTime="2026-02-21 07:07:01.334735961 +0000 UTC m=+1196.367820179" watchObservedRunningTime="2026-02-21 07:07:01.340687295 +0000 UTC m=+1196.373771493" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.380756 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.391834 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.708437 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" path="/var/lib/kubelet/pods/5cfa00dc-af93-49c8-ac1b-67cea9851389/volumes" Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.317037 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" exitCode=0 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318000 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" exitCode=2 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318098 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" 
containerID="57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" exitCode=0 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.317247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5"} Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa"} Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318403 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585"} Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.936500 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937323 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937341 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937358 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 
crc kubenswrapper[4820]: E0221 07:07:03.937370 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937377 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937395 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937403 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937415 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937423 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937441 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937448 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937466 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" 
containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937485 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937492 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937704 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937725 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937734 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937744 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937770 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937779 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.938607 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.940790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4z72b" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.942179 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.951031 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.957560 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163335 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.169042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.169678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.172728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.178924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.261545 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.642031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.642384 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.729311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.744989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.760569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerStarted","Data":"460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92"} Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369664 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.938044 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.938440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.973808 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.986879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.385629 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" exitCode=0 Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.385667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492"} Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.386543 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.386790 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.598784 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.723846 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.723899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724134 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.725384 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.725980 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.729953 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts" (OuterVolumeSpecName: "scripts") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.739583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4" (OuterVolumeSpecName: "kube-api-access-9k7x4") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "kube-api-access-9k7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.776591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.804994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828712 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828870 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828964 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829045 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829112 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829177 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.842437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data" (OuterVolumeSpecName: "config-data") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.931504 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397515 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397855 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff"} Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.398776 4820 scope.go:117] "RemoveContainer" containerID="a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397756 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.401628 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.454185 4820 scope.go:117] "RemoveContainer" containerID="67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.463740 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.473965 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.479315 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.487572 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.487968 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.487985 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488005 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488013 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488043 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488052 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488070 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488306 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488323 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488338 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488357 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.490132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.492892 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.493333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.504373 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.528898 4820 scope.go:117] "RemoveContainer" containerID="57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543713 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.551392 4820 scope.go:117] "RemoveContainer" containerID="7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645766 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 
07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.646794 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.646860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.664056 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.664179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.709442 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" path="/var/lib/kubelet/pods/f26b2ff3-30ed-493c-a041-e23ebe440501/volumes" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.822284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:08 crc kubenswrapper[4820]: I0221 07:07:08.301311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:08 crc kubenswrapper[4820]: I0221 07:07:08.304888 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:12 crc kubenswrapper[4820]: I0221 07:07:12.870670 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:12 crc kubenswrapper[4820]: W0221 07:07:12.875158 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31265e58_52ac_4a6c_86b2_ec212e0ed318.slice/crio-25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a WatchSource:0}: Error finding container 25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a: Status 404 returned error can't find the container with id 25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.460612 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a"} Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.462782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerStarted","Data":"f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f"} Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.816638 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.816706 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.471511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"} Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.471826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"} Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.695698 4820 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" podStartSLOduration=4.000802236 podStartE2EDuration="11.695677988s" podCreationTimestamp="2026-02-21 07:07:03 +0000 UTC" firstStartedPulling="2026-02-21 07:07:04.79053702 +0000 UTC m=+1199.823621218" lastFinishedPulling="2026-02-21 07:07:12.485412782 +0000 UTC m=+1207.518496970" observedRunningTime="2026-02-21 07:07:13.481619886 +0000 UTC m=+1208.514704084" watchObservedRunningTime="2026-02-21 07:07:14.695677988 +0000 UTC m=+1209.728762186" Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.710750 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:15 crc kubenswrapper[4820]: I0221 07:07:15.481778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f"} Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.493909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d"} Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494214 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent" containerID="cri-o://fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494391 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" 
containerName="proxy-httpd" containerID="cri-o://f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core" containerID="cri-o://400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494483 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent" containerID="cri-o://becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.528550 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.225479462 podStartE2EDuration="9.528525178s" podCreationTimestamp="2026-02-21 07:07:07 +0000 UTC" firstStartedPulling="2026-02-21 07:07:12.879120794 +0000 UTC m=+1207.912204992" lastFinishedPulling="2026-02-21 07:07:16.18216651 +0000 UTC m=+1211.215250708" observedRunningTime="2026-02-21 07:07:16.520104908 +0000 UTC m=+1211.553189116" watchObservedRunningTime="2026-02-21 07:07:16.528525178 +0000 UTC m=+1211.561609376" Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.508710 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f" exitCode=2 Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.509041 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53" exitCode=0 Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 
07:07:17.509078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f"} Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.509116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"} Feb 21 07:07:20 crc kubenswrapper[4820]: I0221 07:07:20.534693 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9" exitCode=0 Feb 21 07:07:20 crc kubenswrapper[4820]: I0221 07:07:20.534916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"} Feb 21 07:07:22 crc kubenswrapper[4820]: I0221 07:07:22.551902 4820 generic.go:334] "Generic (PLEG): container finished" podID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerID="f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f" exitCode=0 Feb 21 07:07:22 crc kubenswrapper[4820]: I0221 07:07:22.551948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerDied","Data":"f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f"} Feb 21 07:07:23 crc kubenswrapper[4820]: I0221 07:07:23.896954 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.046960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn" (OuterVolumeSpecName: "kube-api-access-77dvn") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "kube-api-access-77dvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.048969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts" (OuterVolumeSpecName: "scripts") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.066598 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.068517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data" (OuterVolumeSpecName: "config-data") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144062 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144109 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144126 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144139 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerDied","Data":"460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92"} Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571369 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571431 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683055 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:07:24 crc kubenswrapper[4820]: E0221 07:07:24.683477 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683490 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683717 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.684313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.686424 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4z72b" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.686621 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.708831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.857918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: 
I0221 07:07:24.858045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.858071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.964890 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.970379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.975910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.003210 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.436149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.583792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerStarted","Data":"b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c"} Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.609611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerStarted","Data":"498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae"} Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.612166 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.636052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.6360309600000003 podStartE2EDuration="3.63603096s" podCreationTimestamp="2026-02-21 07:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:27.633058429 +0000 UTC m=+1222.666142657" watchObservedRunningTime="2026-02-21 07:07:27.63603096 +0000 UTC m=+1222.669115178" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.027836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.455733 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.456982 4820 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.460526 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.461949 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.472295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.649658 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.652151 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.654852 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.657258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658070 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658396 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658454 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.665177 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759567 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: 
\"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.778976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " 
pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.779332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.781829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.862911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.863008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.863095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.868813 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.869160 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.871521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.878539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.888930 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.890359 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.896486 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.897434 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.900792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.914335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.932252 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.933308 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.954493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.960279 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.996583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.004302 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067229 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " 
pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 
07:07:36.067449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.089697 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.096426 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.097925 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.139096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbdck\" (UniqueName: 
\"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.173691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.183786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.184393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.184821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc 
kubenswrapper[4820]: I0221 07:07:36.185210 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.192967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.193507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.194553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.202668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.204427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s75qh\" (UniqueName: 
\"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.276917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.276997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277090 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277133 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.279815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.315883 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.340558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.351621 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " 
pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.381920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.382222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.382810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.384743 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.384940 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.406361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.438658 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.650042 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.723501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerStarted","Data":"37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7"} Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.840577 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.869732 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.871259 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.877180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.877544 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.885330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"]
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.988721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.006611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.006704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.008637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.009267 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.110998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111397 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.116818 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.118361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.122074 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.132587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.148463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.169320 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.239575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.268149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"]
Feb 21 07:07:37 crc kubenswrapper[4820]: W0221 07:07:37.286855 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc801035_b5e1_4e87_b8a1_c1d9474466c5.slice/crio-56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75 WatchSource:0}: Error finding container 56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75: Status 404 returned error can't find the container with id 56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.740631 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746058 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerID="5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636" exitCode=0
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746130 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerStarted","Data":"56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.759591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerStarted","Data":"d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.762018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerStarted","Data":"6e559911a5c0b4319322723a73f4f2e1a523f0fbef9ae966ae10c0602b1eb11b"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.762078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"]
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.770167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"43724447d4673266639761e597dc790ef34ac85ca1a755c0b241a37ed12c81c4"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.776114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerStarted","Data":"e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882"}
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.812696 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zwzx4" podStartSLOduration=2.812653294 podStartE2EDuration="2.812653294s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:37.806515257 +0000 UTC m=+1232.839599465" watchObservedRunningTime="2026-02-21 07:07:37.812653294 +0000 UTC m=+1232.845737512"
Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.831797 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.787332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerStarted","Data":"22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"}
Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.789576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerStarted","Data":"9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70"}
Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.808456 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podStartSLOduration=2.808438666 podStartE2EDuration="2.808438666s" podCreationTimestamp="2026-02-21 07:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:38.805589078 +0000 UTC m=+1233.838673296" watchObservedRunningTime="2026-02-21 07:07:38.808438666 +0000 UTC m=+1233.841522864"
Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 07:07:39.775332 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 07:07:39.791970 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 07:07:39.802124 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.812743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerStarted","Data":"fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log" containerID="cri-o://da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" gracePeriod=30
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816980 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata" containerID="cri-o://8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" gracePeriod=30
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.817942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.822535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.822567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.825501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerStarted","Data":"41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.830643 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" gracePeriod=30
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.830851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerStarted","Data":"8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890"}
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.850607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.847247325 podStartE2EDuration="5.850585132s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:36.852414343 +0000 UTC m=+1231.885498541" lastFinishedPulling="2026-02-21 07:07:39.85575214 +0000 UTC m=+1234.888836348" observedRunningTime="2026-02-21 07:07:40.840568257 +0000 UTC m=+1235.873652455" watchObservedRunningTime="2026-02-21 07:07:40.850585132 +0000 UTC m=+1235.883669330"
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.864977 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.038098933 podStartE2EDuration="5.864960147s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:37.124885453 +0000 UTC m=+1232.157969651" lastFinishedPulling="2026-02-21 07:07:39.951746667 +0000 UTC m=+1234.984830865" observedRunningTime="2026-02-21 07:07:40.864561116 +0000 UTC m=+1235.897645324" watchObservedRunningTime="2026-02-21 07:07:40.864960147 +0000 UTC m=+1235.898044345"
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.887816 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" podStartSLOduration=4.887799605 podStartE2EDuration="4.887799605s" podCreationTimestamp="2026-02-21 07:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:40.885957984 +0000 UTC m=+1235.919042192" watchObservedRunningTime="2026-02-21 07:07:40.887799605 +0000 UTC m=+1235.920883803"
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.915794 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.997424601 podStartE2EDuration="5.915776754s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:36.980223843 +0000 UTC m=+1232.013308041" lastFinishedPulling="2026-02-21 07:07:39.898575996 +0000 UTC m=+1234.931660194" observedRunningTime="2026-02-21 07:07:40.905430009 +0000 UTC m=+1235.938514207" watchObservedRunningTime="2026-02-21 07:07:40.915776754 +0000 UTC m=+1235.948860952"
Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.942261 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.250776759 podStartE2EDuration="5.942212909s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:37.165977405 +0000 UTC m=+1232.199061603" lastFinishedPulling="2026-02-21 07:07:39.857413555 +0000 UTC m=+1234.890497753" observedRunningTime="2026-02-21 07:07:40.933759247 +0000 UTC m=+1235.966843465" watchObservedRunningTime="2026-02-21 07:07:40.942212909 +0000 UTC m=+1235.975297107"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.283357 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.318340 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.318393 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.355469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.501120 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") "
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") "
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") "
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") "
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607712 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs" (OuterVolumeSpecName: "logs") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.608645 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.614628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck" (OuterVolumeSpecName: "kube-api-access-vbdck") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "kube-api-access-vbdck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.633918 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data" (OuterVolumeSpecName: "config-data") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.639889 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709557 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709868 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709882 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.841256 4820 generic.go:334] "Generic (PLEG): container finished" podID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" exitCode=0
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.841287 4820 generic.go:334] "Generic (PLEG): container finished" podID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" exitCode=143
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.842043 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"}
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"}
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"43724447d4673266639761e597dc790ef34ac85ca1a755c0b241a37ed12c81c4"}
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862583 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.922919 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.926712 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.945521 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.960511 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"
Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.961156 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961200 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} err="failed to get container status \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": rpc error: code = NotFound desc = could not find container \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961246 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"
Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.961682 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961714 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} err="failed to get container status \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961735 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961758 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.962284 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata"
Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.962369 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962379 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962593 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962631 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962625 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} err="failed to get container status \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": rpc error: code = NotFound desc = could not find container \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962650 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.963002 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} err="failed to get container status \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.963847 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.966616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.966662 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.982957 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.024776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.024854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025255 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.126934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.126997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.131151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.132138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.133940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.154473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.280766 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.751121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:42 crc kubenswrapper[4820]: W0221 07:07:42.755939 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625e0821_44af_4965_aa51_75c1d5839e7c.slice/crio-97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec WatchSource:0}: Error finding container 97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec: Status 404 returned error can't find the container with id 97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec
Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.854438 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec"}
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.705882 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" path="/var/lib/kubelet/pods/03a82042-44f5-4238-ba8a-ec7650f46a93/volumes"
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.816825 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.816916 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.863529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"}
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.863611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"}
Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.903086 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.903066339 podStartE2EDuration="2.903066339s" podCreationTimestamp="2026-02-21 07:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:43.892587052 +0000 UTC m=+1238.925671260" watchObservedRunningTime="2026-02-21 07:07:43.903066339 +0000 UTC m=+1238.936150547"
Feb 21 07:07:45 crc kubenswrapper[4820]: I0221 07:07:45.882271 4820 generic.go:334] "Generic (PLEG): container finished" podID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerID="e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882" exitCode=0
Feb 21 07:07:45 crc kubenswrapper[4820]: I0221 07:07:45.882290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4"
event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerDied","Data":"e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.281894 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.321163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.341556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.341731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.440498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.515195 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.515527 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns" containerID="cri-o://f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed" gracePeriod=10 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.901287 4820 generic.go:334] "Generic (PLEG): container finished" podID="68596d31-1da0-47aa-9330-179af16beee5" containerID="f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed" exitCode=0 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.901461 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" 
event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.904657 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d" exitCode=137 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.904822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.906298 4820 generic.go:334] "Generic (PLEG): container finished" podID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerID="41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714" exitCode=0 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.907232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerDied","Data":"41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.947869 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.047883 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.211321 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241611 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241809 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.242627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.248348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.257142 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh" (OuterVolumeSpecName: "kube-api-access-rrwmh") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "kube-api-access-rrwmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.257156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts" (OuterVolumeSpecName: "scripts") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.281529 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.281575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.286679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.317346 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342881 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342993 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343778 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343803 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343814 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343827 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343841 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.347581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.350992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj" (OuterVolumeSpecName: "kube-api-access-mp4lj") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "kube-api-access-mp4lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.388434 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.424867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.429607 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.442717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.442913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.444321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config" (OuterVolumeSpecName: "config") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.447494 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448161 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 
07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448198 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448212 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448223 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448233 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.449321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts" (OuterVolumeSpecName: "scripts") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.450039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.453345 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data" (OuterVolumeSpecName: "config-data") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.454554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr" (OuterVolumeSpecName: "kube-api-access-nrtgr") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "kube-api-access-nrtgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.478900 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.479408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data" (OuterVolumeSpecName: "config-data") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.549967 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550003 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550026 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550033 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550041 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.921649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a"} Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 
07:07:47.921677 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.921725 4820 scope.go:117] "RemoveContainer" containerID="f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.925433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"28ad0df7b26bbd0219980c2f8c1104679c4b4d8454ba1005ca678ce2d979fa35"} Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.925545 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.935553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.936478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerDied","Data":"37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7"} Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.936523 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.956026 4820 scope.go:117] "RemoveContainer" containerID="400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.988331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.007762 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.012020 4820 scope.go:117] "RemoveContainer" containerID="becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.028537 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.038307 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.046870 4820 scope.go:117] "RemoveContainer" containerID="fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050120 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050547 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050563 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050579 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050586 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050599 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="init"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050605 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="init"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050623 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050646 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050660 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050667 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050686 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050692 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054323 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054360 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054375 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054387 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054398 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054409 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.055971 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.062291 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.063702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.063803 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.088563 4820 scope.go:117] "RemoveContainer" containerID="f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132090 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132483 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" containerID="cri-o://9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132910 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" containerID="cri-o://f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.155022 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169754 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169980 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.181524 4820 scope.go:117] "RemoveContainer" containerID="aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197584 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log" containerID="cri-o://97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197984 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata" containerID="cri-o://b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.274723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.274944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.278403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.279388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.280388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.287591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.292971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.414406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.466693 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.487507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv" (OuterVolumeSpecName: "kube-api-access-2rcpv") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "kube-api-access-2rcpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.491685 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts" (OuterVolumeSpecName: "scripts") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.529874 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data" (OuterVolumeSpecName: "config-data") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.549586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585215 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585472 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585557 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585628 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.799598 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951320 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerDied","Data":"22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951363 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955617 4820 generic.go:334] "Generic (PLEG): container finished" podID="625e0821-44af-4965-aa51-75c1d5839e7c" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" exitCode=0
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955645 4820 generic.go:334] "Generic (PLEG): container finished" podID="625e0821-44af-4965-aa51-75c1d5839e7c" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" exitCode=143
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955703 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955823 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.960140 4820 generic.go:334] "Generic (PLEG): container finished" podID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerID="9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" exitCode=143
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.960217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.973914 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" containerID="cri-o://fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.990710 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.991172 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994065 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994988 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs" (OuterVolumeSpecName: "logs") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.014429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd" (OuterVolumeSpecName: "kube-api-access-7mptd") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "kube-api-access-7mptd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.028865 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029397 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata"
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029461 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029469 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log"
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029506 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029514 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029741 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029773 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029788 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.030637 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.042069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.043349 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.044758 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.073423 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data" (OuterVolumeSpecName: "config-data") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099751 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099769 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099780 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099790 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.111749 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.113529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.114414 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID starting with b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114449 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"} err="failed to get container status \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID starting with b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114488 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.114775 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114804 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"} err="failed to get container status \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115025 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115846 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"} err="failed to get container status \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID starting with b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115893 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.116330 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"} err="failed to get container status \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201379 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.204872 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.207015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.217810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.294146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.310884 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.330318 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.331812 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.339942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.340130 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.348002 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.402839 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508933 4820 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508986 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.614526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.617936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.618204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.632660 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.658801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.711324 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" path="/var/lib/kubelet/pods/31265e58-52ac-4a6c-86b2-ec212e0ed318/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.712190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" path="/var/lib/kubelet/pods/625e0821-44af-4965-aa51-75c1d5839e7c/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.713499 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68596d31-1da0-47aa-9330-179af16beee5" path="/var/lib/kubelet/pods/68596d31-1da0-47aa-9330-179af16beee5/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.880463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.987947 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"} Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.987987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"5a11214c736375d2dba7c27ccd6b9be4d089093f6403a91834a045bac0e3cf8d"} Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 
07:07:49.993904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerStarted","Data":"167b5165b4391c8783b551aad0df3cc918db35e3f8cb50ff81e948ca2a961b4f"} Feb 21 07:07:50 crc kubenswrapper[4820]: I0221 07:07:50.110047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:50 crc kubenswrapper[4820]: W0221 07:07:50.113152 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ec23217_e99b_4b39_8be4_d4278275c14b.slice/crio-394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49 WatchSource:0}: Error finding container 394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49: Status 404 returned error can't find the container with id 394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49 Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.014332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017178 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.023402 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerStarted","Data":"4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.023571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.036760 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.036744857 podStartE2EDuration="2.036744857s" podCreationTimestamp="2026-02-21 07:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:51.034157486 +0000 UTC m=+1246.067241674" watchObservedRunningTime="2026-02-21 07:07:51.036744857 +0000 UTC m=+1246.069829055" Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.058287 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.058271129 podStartE2EDuration="3.058271129s" podCreationTimestamp="2026-02-21 07:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:51.049709383 +0000 UTC m=+1246.082793591" watchObservedRunningTime="2026-02-21 07:07:51.058271129 +0000 UTC m=+1246.091355327" Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.283224 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.284654 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.286158 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.286188 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.039190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"} Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.043857 4820 generic.go:334] "Generic (PLEG): container finished" podID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" exitCode=0 Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.044691 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerDied","Data":"fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128"} Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.357937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.383300 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z" (OuterVolumeSpecName: "kube-api-access-gbz8z") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "kube-api-access-gbz8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.423515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.424415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data" (OuterVolumeSpecName: "config-data") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478007 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478336 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478349 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.065487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"} Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.065722 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.067813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerDied","Data":"6e559911a5c0b4319322723a73f4f2e1a523f0fbef9ae966ae10c0602b1eb11b"} Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.067867 4820 scope.go:117] "RemoveContainer" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.068088 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.111362 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.549987569 podStartE2EDuration="6.111342217s" podCreationTimestamp="2026-02-21 07:07:47 +0000 UTC" firstStartedPulling="2026-02-21 07:07:49.05505287 +0000 UTC m=+1244.088137068" lastFinishedPulling="2026-02-21 07:07:52.616407518 +0000 UTC m=+1247.649491716" observedRunningTime="2026-02-21 07:07:53.091839501 +0000 UTC m=+1248.124923699" watchObservedRunningTime="2026-02-21 07:07:53.111342217 +0000 UTC m=+1248.144426415" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.118206 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.132619 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144182 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: E0221 07:07:53.144618 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144633 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144811 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.145436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.150421 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.163027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.215886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.215998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 
07:07:53.216118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.317382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.317556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.318299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.330491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.334255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4xn\" (UniqueName: 
\"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.337094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.468215 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.717733 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" path="/var/lib/kubelet/pods/df035ce1-8e9b-4e72-a751-a56a7a2a613a/volumes" Feb 21 07:07:53 crc kubenswrapper[4820]: W0221 07:07:53.981146 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce77df06_566a_45b8_83f6_b788a3c81757.slice/crio-a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675 WatchSource:0}: Error finding container a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675: Status 404 returned error can't find the container with id a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675 Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.986057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078541 4820 generic.go:334] "Generic (PLEG): container finished" podID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerID="f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" exitCode=0 Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078683 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078964 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.084057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerStarted","Data":"a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.086145 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132782 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133038 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs" (OuterVolumeSpecName: "logs") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133653 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.136645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm" (OuterVolumeSpecName: "kube-api-access-pqpwm") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "kube-api-access-pqpwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.167120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data" (OuterVolumeSpecName: "config-data") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.169225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236299 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236333 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236343 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.663008 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.664404 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.093050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerStarted","Data":"e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec"} Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.093107 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.117339 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.117326222 podStartE2EDuration="2.117326222s" podCreationTimestamp="2026-02-21 07:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:55.11291918 +0000 UTC m=+1250.146003378" watchObservedRunningTime="2026-02-21 07:07:55.117326222 +0000 UTC m=+1250.150410420" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.141143 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.148409 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: E0221 07:07:55.176620 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176635 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: E0221 07:07:55.176653 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176660 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176829 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" 
containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176845 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.177779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.181672 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.194140 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.270865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.270939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.271087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.271257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcb7\" (UniqueName: 
\"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373300 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.377575 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.377580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.399345 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.531413 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.735083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" path="/var/lib/kubelet/pods/61942ced-fcab-4240-b49a-ff65cdeceb00/volumes" Feb 21 07:07:56 crc kubenswrapper[4820]: I0221 07:07:56.014750 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:56 crc kubenswrapper[4820]: I0221 07:07:56.114405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"c093ecf877a602a1aabdc4e519a2734f4375ca978ae6815cef185d7613344b66"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.123933 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.124323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.155121 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.155098359 podStartE2EDuration="2.155098359s" podCreationTimestamp="2026-02-21 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:57.146989046 +0000 UTC m=+1252.180073244" watchObservedRunningTime="2026-02-21 07:07:57.155098359 +0000 UTC m=+1252.188182557" Feb 21 07:07:58 crc kubenswrapper[4820]: I0221 07:07:58.468486 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.432581 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.662439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.662747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:00 crc kubenswrapper[4820]: I0221 07:08:00.673421 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:00 crc kubenswrapper[4820]: I0221 07:08:00.673426 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:03 crc kubenswrapper[4820]: I0221 07:08:03.469911 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:08:03 crc kubenswrapper[4820]: I0221 07:08:03.500832 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:08:04 crc kubenswrapper[4820]: I0221 07:08:04.222009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:08:05 crc kubenswrapper[4820]: I0221 07:08:05.533698 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 21 07:08:05 crc kubenswrapper[4820]: I0221 07:08:05.533997 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:06 crc kubenswrapper[4820]: I0221 07:08:06.616382 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:06 crc kubenswrapper[4820]: I0221 07:08:06.616398 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.667658 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.668301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.676366 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.676860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264030 4820 generic.go:334] "Generic (PLEG): container finished" podID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerID="8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" exitCode=137 Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerDied","Data":"8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890"} Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerDied","Data":"d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc"} Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264429 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.282391 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.426494 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.426639 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.427099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc 
kubenswrapper[4820]: I0221 07:08:11.432501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh" (OuterVolumeSpecName: "kube-api-access-s75qh") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "kube-api-access-s75qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.452134 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data" (OuterVolumeSpecName: "config-data") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.454654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528920 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528952 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528962 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.272903 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.298757 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.322529 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.341007 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:08:12 crc kubenswrapper[4820]: E0221 07:08:12.341478 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.341497 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:08:12 crc kubenswrapper[4820]: 
I0221 07:08:12.341682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.342367 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344692 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344712 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.352113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.446317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.446381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447047 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.548976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549423 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.552964 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.553352 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.554280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.554645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.568014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.681766 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.162460 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:08:13 crc kubenswrapper[4820]: W0221 07:08:13.168850 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1db760_d9fc_477f_bc0b_8119d247253b.slice/crio-6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff WatchSource:0}: Error finding container 6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff: Status 404 returned error can't find the container with id 6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.283749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerStarted","Data":"6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff"} Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.708028 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" path="/var/lib/kubelet/pods/24fcfcd7-30d6-4101-af31-619b24afcb8d/volumes" Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815608 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815889 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815926 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.816562 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.816616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" gracePeriod=600 Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.293775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerStarted","Data":"24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd"} Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297748 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" exitCode=0 Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"} Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"} Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297830 4820 scope.go:117] "RemoveContainer" containerID="a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c" Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.317998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.317972528 podStartE2EDuration="2.317972528s" podCreationTimestamp="2026-02-21 07:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:14.308390025 +0000 UTC m=+1269.341474243" watchObservedRunningTime="2026-02-21 07:08:14.317972528 +0000 UTC m=+1269.351056726" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.536593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.536964 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.537284 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.537320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.539764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.540495 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.725572 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.727020 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.743018 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.919678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.919983 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc 
kubenswrapper[4820]: I0221 07:08:15.920234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 
07:08:16.021844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.023071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.023171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.042567 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.044324 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.537283 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:08:16 crc kubenswrapper[4820]: W0221 07:08:16.539151 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc228462_9ac8_475c_859b_bbce5678a5ea.slice/crio-d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451 WatchSource:0}: Error finding container d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451: Status 404 returned error can't find the container with id d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329571 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61" exitCode=0 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"} Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerStarted","Data":"d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451"} Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.681941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882126 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:17 crc 
kubenswrapper[4820]: I0221 07:08:17.882584 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" containerID="cri-o://8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" gracePeriod=30 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882676 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" containerID="cri-o://3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" gracePeriod=30 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882725 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" containerID="cri-o://e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" gracePeriod=30 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882692 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" containerID="cri-o://e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" gracePeriod=30 Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.895947 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.195:3000/\": EOF" Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343283 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" exitCode=0 Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343613 4820 
generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" exitCode=2 Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343626 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" exitCode=0 Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"} Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"} Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"} Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.346302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerStarted","Data":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"} Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.346454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.368952 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" podStartSLOduration=3.368932778 podStartE2EDuration="3.368932778s" podCreationTimestamp="2026-02-21 07:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:18.362482401 +0000 UTC m=+1273.395566609" watchObservedRunningTime="2026-02-21 07:08:18.368932778 +0000 UTC m=+1273.402016996" Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.415294 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.195:3000/\": dial tcp 10.217.0.195:3000: connect: connection refused" Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.017573 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.018124 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" containerID="cri-o://33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" gracePeriod=30 Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.018278 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" containerID="cri-o://b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" gracePeriod=30 Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.355701 4820 generic.go:334] "Generic (PLEG): container finished" podID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerID="33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" exitCode=143 Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.355740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9"} Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.384832 4820 generic.go:334] "Generic (PLEG): container finished" podID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerID="b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" exitCode=0 Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.384909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76"} Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.647582 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.682281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.724463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759254 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759412 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " Feb 21 07:08:22 crc kubenswrapper[4820]: 
I0221 07:08:22.759473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.760830 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs" (OuterVolumeSpecName: "logs") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.785720 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7" (OuterVolumeSpecName: "kube-api-access-stcb7") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "kube-api-access-stcb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.805973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.813652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data" (OuterVolumeSpecName: "config-data") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.824164 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861868 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861913 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861928 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861940 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.963813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod 
\"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964223 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964336 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.965766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.965993 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.966067 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.971418 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts" (OuterVolumeSpecName: "scripts") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.972336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm" (OuterVolumeSpecName: "kube-api-access-v7pwm") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "kube-api-access-v7pwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.004354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.055814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066268 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data" (OuterVolumeSpecName: "config-data") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:23 crc kubenswrapper[4820]: W0221 07:08:23.066904 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/650275e2-1f20-427a-89b7-de2c084d3b40/volumes/kubernetes.io~secret/config-data Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066922 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data" (OuterVolumeSpecName: "config-data") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067322 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067338 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067346 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067355 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067363 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396144 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" exitCode=0 Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396224 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"5a11214c736375d2dba7c27ccd6b9be4d089093f6403a91834a045bac0e3cf8d"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396363 4820 scope.go:117] "RemoveContainer" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.399084 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.401377 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"c093ecf877a602a1aabdc4e519a2734f4375ca978ae6815cef185d7613344b66"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.418011 4820 scope.go:117] "RemoveContainer" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.422628 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.439002 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.450676 4820 scope.go:117] "RemoveContainer" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc 
kubenswrapper[4820]: I0221 07:08:23.460781 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.473291 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.489757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490289 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490310 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490322 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490374 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490384 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" 
containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490413 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490421 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490432 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490643 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490672 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490688 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490700 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490721 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490731 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" 
containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.491983 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.493449 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503412 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.508679 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.508829 4820 scope.go:117] "RemoveContainer" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.518637 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.521740 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.530021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.530577 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.539462 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.540492 4820 scope.go:117] "RemoveContainer" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.542170 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": container with ID starting with e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89 not found: ID does not exist" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.542325 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"} err="failed to get container status \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": rpc error: code = NotFound desc = could not find container \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": container with ID starting with e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.543402 4820 scope.go:117] "RemoveContainer" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 
07:08:23.544015 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": container with ID starting with 3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b not found: ID does not exist" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.544078 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"} err="failed to get container status \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": rpc error: code = NotFound desc = could not find container \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": container with ID starting with 3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.544116 4820 scope.go:117] "RemoveContainer" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.553910 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": container with ID starting with e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873 not found: ID does not exist" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.553969 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} err="failed to get container status \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": rpc 
error: code = NotFound desc = could not find container \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": container with ID starting with e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554004 4820 scope.go:117] "RemoveContainer" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.554514 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": container with ID starting with 8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89 not found: ID does not exist" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554543 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"} err="failed to get container status \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": rpc error: code = NotFound desc = could not find container \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": container with ID starting with 8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554563 4820 scope.go:117] "RemoveContainer" containerID="b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.572339 4820 scope.go:117] "RemoveContainer" containerID="33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" 
(UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678303 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678370 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678497 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678669 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.679544 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.684523 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.687009 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.693493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.693709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.709414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.709959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qbqd\" (UniqueName: 
\"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.712674 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" path="/var/lib/kubelet/pods/650275e2-1f20-427a-89b7-de2c084d3b40/volumes" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.713642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" path="/var/lib/kubelet/pods/bdf0d0bd-7674-4b6d-8e43-e199356ee168/volumes" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.714384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: 
\"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780567 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780723 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.781355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.781588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.785953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" 
Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.787406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.787798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.793994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.801225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.835861 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.854797 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.887516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" 
Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.888584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.889066 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.901150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.180965 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.327536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:24 crc kubenswrapper[4820]: W0221 07:08:24.401969 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod118c08af_2bde_440a_a9cf_ad089288aae6.slice/crio-a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e WatchSource:0}: Error finding container a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e: Status 404 returned error can't find the container with id a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.404626 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.421734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"d8c35a16f23800f75c01941a376d91e2f3aecbf3120d50dff54c29623c837dbf"} Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.641385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.444860 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.448090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} Feb 21 07:08:25 crc 
kubenswrapper[4820]: I0221 07:08:25.448134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.453007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerStarted","Data":"ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.453046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerStarted","Data":"04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.490426 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k9s8t" podStartSLOduration=2.490410251 podStartE2EDuration="2.490410251s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:25.486601987 +0000 UTC m=+1280.519686185" watchObservedRunningTime="2026-02-21 07:08:25.490410251 +0000 UTC m=+1280.523494439" Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.494310 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.494299698 podStartE2EDuration="2.494299698s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:25.470850044 +0000 UTC m=+1280.503934262" watchObservedRunningTime="2026-02-21 
07:08:25.494299698 +0000 UTC m=+1280.527383896" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.046335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.140322 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.140899 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" containerID="cri-o://9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" gracePeriod=10 Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.489065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.489406 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.492192 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerID="9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" exitCode=0 Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.492898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.742604 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842224 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842488 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.858777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq" (OuterVolumeSpecName: "kube-api-access-tz8qq") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "kube-api-access-tz8qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.902436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config" (OuterVolumeSpecName: "config") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.910125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.919682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.921005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.932010 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944655 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944667 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944675 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944683 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944690 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.502754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"} Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.504868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75"} Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.504907 4820 scope.go:117] "RemoveContainer" containerID="9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.505020 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.539163 4820 scope.go:117] "RemoveContainer" containerID="5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.561312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.579519 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.708252 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" path="/var/lib/kubelet/pods/bc801035-b5e1-4e87-b8a1-c1d9474466c5/volumes" Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.523806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"} Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.524454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.550607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.658710976 podStartE2EDuration="6.550589805s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="2026-02-21 07:08:24.41555404 +0000 UTC m=+1279.448638238" lastFinishedPulling="2026-02-21 07:08:28.307432869 +0000 UTC m=+1283.340517067" observedRunningTime="2026-02-21 07:08:29.543375536 +0000 UTC m=+1284.576459754" watchObservedRunningTime="2026-02-21 07:08:29.550589805 +0000 UTC m=+1284.583674003" Feb 21 07:08:30 crc kubenswrapper[4820]: I0221 07:08:30.537494 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerID="ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36" exitCode=0 Feb 21 07:08:30 crc kubenswrapper[4820]: I0221 07:08:30.537618 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerDied","Data":"ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36"} Feb 21 07:08:31 crc kubenswrapper[4820]: I0221 07:08:31.439682 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: i/o timeout" Feb 21 07:08:31 crc kubenswrapper[4820]: I0221 07:08:31.914395 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: 
\"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.049961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts" (OuterVolumeSpecName: "scripts") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.053609 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g" (OuterVolumeSpecName: "kube-api-access-bf84g") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "kube-api-access-bf84g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.073503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data" (OuterVolumeSpecName: "config-data") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.076791 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146691 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146722 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146731 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146739 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555083 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerDied","Data":"04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852"} Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555382 4820 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555139 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.744343 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.744547 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" containerID="cri-o://e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" gracePeriod=30 Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.795525 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.795814 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log" containerID="cri-o://0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" gracePeriod=30 Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.796396 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api" containerID="cri-o://c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" gracePeriod=30 Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.807685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.807911 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" 
containerName="nova-metadata-log" containerID="cri-o://4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" gracePeriod=30 Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.808331 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" containerID="cri-o://a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" gracePeriod=30 Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.456940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.471333 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.472919 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.474412 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.474474 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564411 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10708bc-02ad-4956-95a6-ae03aa172988" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" exitCode=0 Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564442 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10708bc-02ad-4956-95a6-ae03aa172988" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" exitCode=143 Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"d8c35a16f23800f75c01941a376d91e2f3aecbf3120d50dff54c29623c837dbf"} Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564533 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564559 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.566315 4820 generic.go:334] "Generic (PLEG): container finished" podID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" exitCode=143 Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.566346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"} Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs" (OuterVolumeSpecName: "logs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582513 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.585525 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.587001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd" (OuterVolumeSpecName: "kube-api-access-4qbqd") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "kube-api-access-4qbqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.603656 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.604117 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604149 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} err="failed to get container status \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": rpc error: code = NotFound desc = could not find container 
\"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604169 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.604543 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604603 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} err="failed to get container status \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604623 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604900 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} err="failed to get container status \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": rpc error: code = NotFound desc = could not find 
container \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604923 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.605130 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} err="failed to get container status \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.609518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data" (OuterVolumeSpecName: "config-data") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.628225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.638539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.646702 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684080 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684110 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684121 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684130 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684140 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.889980 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.898649 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.917880 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918298 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918315 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918327 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918335 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918357 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918372 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918378 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api" Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918404 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="init" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918412 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="init" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918588 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918598 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918607 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918624 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.919509 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.923723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.924168 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.931167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.931856 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.089841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090518 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.191984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" 
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.201585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.210904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.290590 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.740037 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:34 crc kubenswrapper[4820]: W0221 07:08:34.746337 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e16d52c_9322_49cf_9948_8d1c56c0a5ed.slice/crio-c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0 WatchSource:0}: Error finding container c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0: Status 404 returned error can't find the container with id c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0 Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7"} Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110"} Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0"} Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.609353 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.609334309 podStartE2EDuration="2.609334309s" podCreationTimestamp="2026-02-21 07:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:35.605429702 +0000 UTC m=+1290.638513920" watchObservedRunningTime="2026-02-21 07:08:35.609334309 +0000 UTC m=+1290.642418507" Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.707918 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" path="/var/lib/kubelet/pods/f10708bc-02ad-4956-95a6-ae03aa172988/volumes" Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.940007 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:50358->10.217.0.197:8775: read: connection reset by peer" Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.940048 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:50360->10.217.0.197:8775: read: connection reset by peer" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.377188 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530944 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531299 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs" (OuterVolumeSpecName: "logs") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531657 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.537739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb" (OuterVolumeSpecName: "kube-api-access-4chfb") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "kube-api-access-4chfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.560038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.566840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data" (OuterVolumeSpecName: "config-data") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.590005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601450 4820 generic.go:334] "Generic (PLEG): container finished" podID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" exitCode=0 Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"} Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49"} Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601653 4820 scope.go:117] "RemoveContainer" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633006 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 
07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633229 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633345 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633427 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.660799 4820 scope.go:117] "RemoveContainer" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.668020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.682083 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.684893 4820 scope.go:117] "RemoveContainer" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.685571 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": container with ID starting with a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e not found: ID does not exist" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" Feb 21 07:08:36 crc kubenswrapper[4820]: 
I0221 07:08:36.685614 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"} err="failed to get container status \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": rpc error: code = NotFound desc = could not find container \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": container with ID starting with a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e not found: ID does not exist" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.685639 4820 scope.go:117] "RemoveContainer" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.685974 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": container with ID starting with 4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d not found: ID does not exist" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.686018 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"} err="failed to get container status \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": rpc error: code = NotFound desc = could not find container \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": container with ID starting with 4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d not found: ID does not exist" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.697488 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.698092 
4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.698290 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698351 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698602 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698681 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.699724 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.711532 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.711566 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.723859 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836855 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " 
pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.941858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.942914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.943037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.954821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wf5\" (UniqueName: 
\"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.014982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.488112 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.618674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36"} Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.620932 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce77df06-566a-45b8-83f6-b788a3c81757" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" exitCode=0 Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.620978 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerDied","Data":"e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec"} Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.716015 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" path="/var/lib/kubelet/pods/5ec23217-e99b-4b39-8be4-d4278275c14b/volumes" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.833650 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961880 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.968939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn" (OuterVolumeSpecName: "kube-api-access-gp4xn") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "kube-api-access-gp4xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.000191 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data" (OuterVolumeSpecName: "config-data") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.005463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064602 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064638 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064648 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.632699 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.633038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634429 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerDied","Data":"a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634497 4820 scope.go:117] "RemoveContainer" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.670425 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.670399742 podStartE2EDuration="2.670399742s" podCreationTimestamp="2026-02-21 07:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:38.659744889 +0000 UTC m=+1293.692829097" watchObservedRunningTime="2026-02-21 07:08:38.670399742 +0000 UTC m=+1293.703483940" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.688736 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.703497 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714172 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: E0221 07:08:38.714612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714636 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" 
containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714880 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.715731 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.724540 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.725636 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.980847 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.980928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.981000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.991378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.004638 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.006884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbfq\" (UniqueName: 
\"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.034328 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.441548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:39 crc kubenswrapper[4820]: W0221 07:08:39.447591 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca75969_e299_435a_a607_d470d4ab831e.slice/crio-fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf WatchSource:0}: Error finding container fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf: Status 404 returned error can't find the container with id fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.643554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerStarted","Data":"fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf"} Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.706898 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" path="/var/lib/kubelet/pods/ce77df06-566a-45b8-83f6-b788a3c81757/volumes" Feb 21 07:08:40 crc kubenswrapper[4820]: I0221 07:08:40.656990 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerStarted","Data":"f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd"} Feb 21 07:08:40 crc kubenswrapper[4820]: I0221 07:08:40.692990 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.692966942 podStartE2EDuration="2.692966942s" podCreationTimestamp="2026-02-21 07:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:40.682776042 +0000 UTC m=+1295.715860250" watchObservedRunningTime="2026-02-21 07:08:40.692966942 +0000 UTC m=+1295.726051140" Feb 21 07:08:42 crc kubenswrapper[4820]: I0221 07:08:42.015641 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:08:42 crc kubenswrapper[4820]: I0221 07:08:42.015728 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.034661 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.292317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.292812 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:45 crc kubenswrapper[4820]: I0221 07:08:45.310557 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:45 crc kubenswrapper[4820]: I0221 07:08:45.310574 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:47 crc kubenswrapper[4820]: I0221 07:08:47.015246 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:47 crc kubenswrapper[4820]: I0221 07:08:47.015946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:48 crc kubenswrapper[4820]: I0221 07:08:48.029917 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:48 crc kubenswrapper[4820]: I0221 07:08:48.029975 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.034965 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.059997 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.797007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:08:53 crc kubenswrapper[4820]: I0221 07:08:53.862727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.298025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 
07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.299509 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.299574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.305961 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.815437 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.822729 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.023349 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.025161 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.027878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.847381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.956440 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.956762 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics" containerID="cri-o://c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" gracePeriod=30 Feb 21 
07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.455946 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.553589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"df55e56a-dbd2-4082-8915-c095d79a0445\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") " Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.558316 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf" (OuterVolumeSpecName: "kube-api-access-pz5jf") pod "df55e56a-dbd2-4082-8915-c095d79a0445" (UID: "df55e56a-dbd2-4082-8915-c095d79a0445"). InnerVolumeSpecName "kube-api-access-pz5jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.656522 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.847867 4820 generic.go:334] "Generic (PLEG): container finished" podID="df55e56a-dbd2-4082-8915-c095d79a0445" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" exitCode=2 Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848052 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerDied","Data":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"} Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerDied","Data":"60eb280dafd317b213ced0ce92cb061208211ecad999bed743c8a76df9e0ad8d"} Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848158 4820 scope.go:117] "RemoveContainer" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.872961 4820 scope.go:117] "RemoveContainer" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" Feb 21 07:08:58 crc kubenswrapper[4820]: E0221 07:08:58.873461 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": container with ID starting with c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804 not found: ID does not exist" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.873506 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"} err="failed to get container status \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": rpc error: code = NotFound desc = could not find container \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": container with ID starting with 
c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804 not found: ID does not exist" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.891028 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.905145 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.915979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:58 crc kubenswrapper[4820]: E0221 07:08:58.916546 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.916570 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.916810 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.925931 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.926095 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.927749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.928120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063333 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165702 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " 
pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.186453 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.252853 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.612182 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616672 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent" containerID="cri-o://131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" gracePeriod=30 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616766 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd" containerID="cri-o://f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" gracePeriod=30 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616831 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent" containerID="cri-o://986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" gracePeriod=30 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.617024 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core" containerID="cri-o://aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" gracePeriod=30 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.680134 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:08:59 crc kubenswrapper[4820]: W0221 07:08:59.684736 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb570ff_2a5e_4913_a84f_346579eaa104.slice/crio-71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8 WatchSource:0}: Error finding container 71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8: Status 404 returned error can't find the container with id 71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.687010 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.711808 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" path="/var/lib/kubelet/pods/df55e56a-dbd2-4082-8915-c095d79a0445/volumes" Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862953 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" exitCode=0 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862989 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" exitCode=2 Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862979 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"} Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.863031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"} Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.866425 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerStarted","Data":"71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8"} Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.888953 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" exitCode=0 Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.889076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"} Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.892063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerStarted","Data":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"} Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.892294 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.909524 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.53959628 podStartE2EDuration="2.909506853s" podCreationTimestamp="2026-02-21 07:08:58 +0000 UTC" firstStartedPulling="2026-02-21 07:08:59.686774779 +0000 UTC m=+1314.719858977" lastFinishedPulling="2026-02-21 07:09:00.056685352 +0000 UTC m=+1315.089769550" observedRunningTime="2026-02-21 07:09:00.907904879 +0000 UTC m=+1315.940989077" watchObservedRunningTime="2026-02-21 07:09:00.909506853 +0000 UTC m=+1315.942591041" Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.922826 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924487 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" exitCode=0 Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"} Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e"} Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924644 4820 scope.go:117] "RemoveContainer" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.951653 4820 scope.go:117] "RemoveContainer" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.000858 4820 scope.go:117] "RemoveContainer" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.032606 4820 scope.go:117] "RemoveContainer" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040041 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc 
kubenswrapper[4820]: I0221 07:09:03.040076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040134 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.041556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.041735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.046757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k" (OuterVolumeSpecName: "kube-api-access-zzf7k") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "kube-api-access-zzf7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.047816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts" (OuterVolumeSpecName: "scripts") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054038 4820 scope.go:117] "RemoveContainer" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.054765 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": container with ID starting with f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff not found: ID does not exist" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054845 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"} err="failed to get container status \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": rpc error: code = NotFound desc = could not find container \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": container with ID starting with f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff not found: ID does not exist" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054879 4820 scope.go:117] "RemoveContainer" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.056305 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": container with ID starting with aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46 not found: ID does not exist" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056383 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"} err="failed to get container status \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": rpc error: code = NotFound desc = could not find container \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": container with ID starting with aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46 not found: ID does not exist" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056436 4820 scope.go:117] "RemoveContainer" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.056793 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": container with ID starting with 986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5 not found: ID does not exist" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056822 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"} err="failed to get container status \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": rpc error: code = NotFound desc = could not find container \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": container with ID starting with 986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5 not found: ID does not exist" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056841 4820 scope.go:117] "RemoveContainer" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 
07:09:03.057219 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": container with ID starting with 131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d not found: ID does not exist" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.057268 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"} err="failed to get container status \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": rpc error: code = NotFound desc = could not find container \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": container with ID starting with 131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d not found: ID does not exist" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.071771 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.116645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.138475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data" (OuterVolumeSpecName: "config-data") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143218 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143268 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143280 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143289 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143298 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143309 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzf7k\" (UniqueName: 
\"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143318 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.933942 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.970480 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.980653 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990653 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990673 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990679 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990693 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" 
containerName="ceilometer-notification-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990698 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990718 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990876 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990891 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990903 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.992522 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995483 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995655 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.001159 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.061611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.061875 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164853 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164985 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.168916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.169229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.169876 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.170530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.170979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.182550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.315132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.770647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.944708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"0c38be7124a920b640712dd690755259fce0c90bcf50290cc80460e97c079adc"} Feb 21 07:09:05 crc kubenswrapper[4820]: I0221 07:09:05.712612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" path="/var/lib/kubelet/pods/118c08af-2bde-440a-a9cf-ad089288aae6/volumes" Feb 21 07:09:05 crc kubenswrapper[4820]: I0221 07:09:05.955438 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} Feb 21 07:09:06 crc kubenswrapper[4820]: I0221 07:09:06.965473 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} Feb 21 07:09:06 crc kubenswrapper[4820]: I0221 07:09:06.966429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} Feb 21 07:09:08 crc kubenswrapper[4820]: I0221 07:09:08.989727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} Feb 21 07:09:08 crc kubenswrapper[4820]: I0221 
07:09:08.990371 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 21 07:09:09 crc kubenswrapper[4820]: I0221 07:09:09.025266 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.868805319 podStartE2EDuration="6.025248873s" podCreationTimestamp="2026-02-21 07:09:03 +0000 UTC" firstStartedPulling="2026-02-21 07:09:04.777179167 +0000 UTC m=+1319.810263365" lastFinishedPulling="2026-02-21 07:09:07.933622711 +0000 UTC m=+1322.966706919" observedRunningTime="2026-02-21 07:09:09.010385975 +0000 UTC m=+1324.043470193" watchObservedRunningTime="2026-02-21 07:09:09.025248873 +0000 UTC m=+1324.058333081"
Feb 21 07:09:09 crc kubenswrapper[4820]: I0221 07:09:09.268168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 21 07:09:34 crc kubenswrapper[4820]: I0221 07:09:34.323435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.296626 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.298338 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.324622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.347509 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.365463 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bcvpx"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.366854 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.372485 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.393378 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.433326 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.465057 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586144 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.587123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.587917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.596391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bcvpx"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.641877 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.652865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.690718 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.731493 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" path="/var/lib/kubelet/pods/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce/volumes"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.732073 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.746574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.748510 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.750569 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.789029 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793937 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.840841 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.882210 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895646 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.896988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.901091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.905427 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bzcnx"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.906182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.908557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.913172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.918430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.921131 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.922928 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.936044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.936662 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.956802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.960282 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bzcnx"]
Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.997286 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.002299 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.002383 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:56.502369334 +0000 UTC m=+1371.535453532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.003829 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.007862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.018649 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.018846 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient" containerID="cri-o://909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342" gracePeriod=2
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.025968 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.028321 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.048370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.051563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.077076 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"]
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.077784 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.077801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.078116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.079165 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.106479 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.130870 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.132553 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.159267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.179935 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.180220 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" containerID="cri-o://803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" gracePeriod=30
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.180411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" containerID="cri-o://0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" gracePeriod=30
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.201025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220216 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.220532 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.220637 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:56.720615748 +0000 UTC m=+1371.753699956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.228994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.360606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.363129 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.384262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.403698 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.460980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.461070 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.461939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.483541 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vfn4b"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.521301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.528975 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vfn4b"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.531409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.557049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.558852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.592191 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.595149 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter" containerID="cri-o://087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e" gracePeriod=300
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.603324 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.603416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:57.60338754 +0000 UTC m=+1372.636471738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.650803 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.654695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.730780 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c"
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.731106 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5knjn"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.757865 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"]
Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.761201 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.782947 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.810393 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.810480 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:57.810457019 +0000 UTC m=+1372.843541217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.810820 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.896534 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.918481 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter" containerID="cri-o://9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030" gracePeriod=300 Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.943487 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" 
containerName="ovsdbserver-sb" containerID="cri-o://763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4" gracePeriod=300 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.022684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.077933 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.109777 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.169585 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb" containerID="cri-o://e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9" gracePeriod=300 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.185690 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.230753 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.239415 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.242805 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.273623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.297414 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.308508 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.322107 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.322424 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-p2v97" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter" containerID="cri-o://4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.352140 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.366865 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.391381 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.391668 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" 
containerName="cinder-scheduler" containerID="cri-o://3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.392112 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe" containerID="cri-o://275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.407727 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.408032 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" containerID="cri-o://9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.409660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" containerID="cri-o://765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.419173 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.431634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.431768 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.453317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.479631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.488857 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.518992 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.533625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.537358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.537725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.544662 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.560644 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.579343 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.579800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" containerID="cri-o://d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.580280 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" containerID="cri-o://0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.584037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: 
I0221 07:09:57.612441 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.613380 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" containerID="cri-o://c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.614606 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" containerID="cri-o://c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.640559 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.647518 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.647598 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:58.147567467 +0000 UTC m=+1373.180651665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.648330 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.648310178 +0000 UTC m=+1374.681394376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.664708 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.833025 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.833111 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.833088229 +0000 UTC m=+1374.866172427 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.833643 4820 generic.go:334] "Generic (PLEG): container finished" podID="899bd84b-c67f-4a89-9f92-a68094530566" containerID="9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861472 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861765 4820 generic.go:334] "Generic (PLEG): container finished" podID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerID="087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861781 4820 generic.go:334] "Generic (PLEG): container finished" podID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerID="763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.867992 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" path="/var/lib/kubelet/pods/085b95c8-2602-461b-8a08-91aff75f97a0/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.869755 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" path="/var/lib/kubelet/pods/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.870738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" 
path="/var/lib/kubelet/pods/8e8a463c-63a8-424f-a3ab-4e46390b8cca/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.871316 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.871492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b290d702-774e-48b8-a243-5a9c648740a7" path="/var/lib/kubelet/pods/b290d702-774e-48b8-a243-5a9c648740a7/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.873547 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" path="/var/lib/kubelet/pods/b400c916-2ba9-4d7e-b9f5-6044605f279c/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.874705 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" path="/var/lib/kubelet/pods/b8063e5a-6b15-4855-9ae2-5fdcc912b472/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.875661 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" path="/var/lib/kubelet/pods/bbe51cee-e461-4a5f-86d9-0eb600da3a82/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.881765 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.881829 4820 generic.go:334] "Generic (PLEG): container finished" podID="96d07086-c2e8-4351-bac8-b99c485826c4" containerID="4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.882100 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" 
path="/var/lib/kubelet/pods/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.883924 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" path="/var/lib/kubelet/pods/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.886312 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" path="/var/lib/kubelet/pods/e27134bb-c9b2-42d4-bad5-81e7b05874e7/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899112 4820 generic.go:334] "Generic (PLEG): container finished" podID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerID="9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899127 4820 generic.go:334] "Generic (PLEG): container finished" podID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerID="e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.904193 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" path="/var/lib/kubelet/pods/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.909492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" path="/var/lib/kubelet/pods/f9b51414-aa8f-49ad-b662-b3c44eb0bc62/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917167 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917203 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917544 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerDied","Data":"4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918417 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918793 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.919035 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns" containerID="cri-o://6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" gracePeriod=10 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.934401 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.942481 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.955563 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455bfe0a_a135_4900_8b15_ce584dc8a5bb.slice/crio-763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf0c3ff8_e36f_4539_a7da_9d2b1e7a146d.slice/crio-conmon-e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef3827c2_ee55_4f86_a752_d7cbc9c6454e.slice/crio-conmon-d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96d07086_c2e8_4351_bac8_b99c485826c4.slice/crio-conmon-4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce.scope\": RecentStats: unable to find data in memory cache]" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.983831 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.031309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.031885 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" containerID="cri-o://ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032267 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" containerID="cri-o://4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032325 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" containerID="cri-o://697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032369 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" containerID="cri-o://adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" containerID="cri-o://b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032449 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" containerID="cri-o://15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" containerID="cri-o://143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032513 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" containerID="cri-o://1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032545 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" containerID="cri-o://5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" gracePeriod=30 Feb 21 07:09:58 crc 
kubenswrapper[4820]: I0221 07:09:58.032579 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" containerID="cri-o://c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032620 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" containerID="cri-o://956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" containerID="cri-o://472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032690 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" containerID="cri-o://8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032688 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" containerID="cri-o://6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032807 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" 
containerID="cri-o://3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.063264 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075185 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075432 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7796b97765-sqvtc" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" containerID="cri-o://cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075544 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7796b97765-sqvtc" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" containerID="cri-o://89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.105476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.128545 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.129055 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85cb846b98-bwgbn" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" containerID="cri-o://eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.129463 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-85cb846b98-bwgbn" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" containerID="cri-o://2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.155138 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.171070 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.184971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.235570 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.259291 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.259369 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.259349617 +0000 UTC m=+1374.292433815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.286029 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.311417 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" containerID="cri-o://d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.332431 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.378058 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.393394 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.401130 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.409206 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.413176 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.413284 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.416433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.425972 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.426227 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" containerID="cri-o://4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.426374 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" containerID="cri-o://21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.445569 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.458403 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.467388 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.480007 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.486395 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.497875 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.498126 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" containerID="cri-o://23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.498283 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" containerID="cri-o://841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.506942 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.507359 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.521835 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.530821 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.531071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-867cbf55-jx754" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" containerID="cri-o://cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.531259 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-867cbf55-jx754" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" containerID="cri-o://53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542403 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542711 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" containerID="cri-o://3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542829 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
containerName="barbican-keystone-listener" containerID="cri-o://df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579150 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579277 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579521 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.582045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config" (OuterVolumeSpecName: "config") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.583841 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.587311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts" (OuterVolumeSpecName: "scripts") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.589257 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.607947 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb" (OuterVolumeSpecName: "kube-api-access-fs4qb") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "kube-api-access-fs4qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.616523 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.616716 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.618230 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.634724 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" containerID="cri-o://7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" gracePeriod=604800 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.634845 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.634911 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.637610 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.637689 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.644039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod 
"455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.655294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.657161 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" containerID="cri-o://f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.668824 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684831 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684906 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685093 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685838 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config" (OuterVolumeSpecName: "config") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685942 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688181 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688514 4820 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688524 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688533 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688543 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688551 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688572 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688581 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.694678 4820 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc 
kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-rwsk7" message=< Feb 21 07:09:58 crc kubenswrapper[4820]: Exiting ovsdb-server (5) [ OK ] Feb 21 07:09:58 crc kubenswrapper[4820]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ 
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.694720 4820 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ 
FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" containerID="cri-o://355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.694751 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" containerID="cri-o://355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" gracePeriod=29 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.703830 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.704253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps" (OuterVolumeSpecName: "kube-api-access-m7xps") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: 
"96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "kube-api-access-m7xps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.713535 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.720427 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.727948 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.728370 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" containerID="cri-o://d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.728935 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" containerID="cri-o://84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.733764 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.742378 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.742616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.775481 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794284 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794533 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") 
" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.795360 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.796723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.799387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts" (OuterVolumeSpecName: "scripts") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.800114 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config" (OuterVolumeSpecName: "config") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.805433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.814711 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.822303 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.828790 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.828992 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.832035 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.859497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj" (OuterVolumeSpecName: "kube-api-access-lwpjj") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "kube-api-access-lwpjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.860429 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.863442 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.882665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897909 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897946 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897957 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897968 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897993 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909081 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909341 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909625 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8c841249-7293-4826-b05f-e4a189aaef07" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.916977 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerID="d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96" exitCode=143 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.917036 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.928069 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.963183 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" exitCode=0 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.963339 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.964621 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: if [ -n "glance" ]; then Feb 21 07:09:58 crc kubenswrapper[4820]: GRANT_DATABASE="glance" Feb 21 07:09:58 crc kubenswrapper[4820]: else Feb 21 07:09:58 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:58 crc kubenswrapper[4820]: fi Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:58 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:58 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:58 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:58 crc kubenswrapper[4820]: # support updates Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964844 4820 scope.go:117] "RemoveContainer" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.966572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-cd19-account-create-update-77csv" podUID="95200e0a-ca93-4303-80af-8b950ddc8746" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.004405 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: 
I0221 07:09:59.007617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007861 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.008670 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc 
kubenswrapper[4820]: I0221 07:09:59.012719 4820 generic.go:334] "Generic (PLEG): container finished" podID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerID="c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.012799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.027158 4820 generic.go:334] "Generic (PLEG): container finished" podID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.027250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"} Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.041835 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.043521 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.045337 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerID="eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.045430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443"} Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.045346 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.048768 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058729 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058884 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera" containerID="cri-o://437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25" gracePeriod=30 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058897 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.063523 4820 generic.go:334] "Generic (PLEG): container finished" podID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerID="89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.063641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.065886 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.066082 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.066492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerDied","Data":"7d34608592e5bad3ce2cdbb838b7f2d91070fccc15c351f0f966dcae95c21a16"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.068196 4820 generic.go:334] "Generic (PLEG): container finished" podID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerID="4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.068346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.072631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7" (OuterVolumeSpecName: "kube-api-access-c7dh7") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "kube-api-access-c7dh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076315 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076484 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076620 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076731 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076825 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076897 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076970 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077039 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077096 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077176 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077254 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077330 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.079291 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.079191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080328 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080669 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.081148 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cffb45b79-w6bp8" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd" containerID="cri-o://a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f" gracePeriod=30 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.081554 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cffb45b79-w6bp8" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server" 
containerID="cri-o://974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b" gracePeriod=30 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.087188 4820 generic.go:334] "Generic (PLEG): container finished" podID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerID="3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.087304 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.096622 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.098662 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerID="23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.098821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.106841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerStarted","Data":"e23762ffd7ce106b9f82fdb1d0d30eef475c43de4d355359cab19dd81674c400"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.110270 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.111346 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.123604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" containerID="cri-o://0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" gracePeriod=604800 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.147770 4820 generic.go:334] "Generic (PLEG): container finished" podID="d7d6374d-1595-4586-b161-d199a2b39068" containerID="909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342" exitCode=137 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.184192 4820 generic.go:334] "Generic (PLEG): container finished" podID="4709782f-54e7-4a78-a56e-8f58a5556501" containerID="d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.184273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.189067 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" exitCode=0 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.189129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" 
event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192602 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192724 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.194823 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42ba382-9e03-4f39-904e-87f4d764175c" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" exitCode=143 Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.194866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"} Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.242456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.257868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.276063 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.280988 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320806 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320845 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320857 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320869 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.320919 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.320999 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:01.3209811 +0000 UTC m=+1376.354065298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.323069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.344077 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.350680 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.357047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.374278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config" (OuterVolumeSpecName: "config") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.383285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.384755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.385492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.388519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.408933 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.411394 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.412866 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.412930 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" 
podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422866 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422904 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422918 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422929 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422939 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422949 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422961 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") on node \"crc\" 
DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422973 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.431799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.516735 4820 scope.go:117] "RemoveContainer" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.525481 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.549606 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.603587 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.613223 4820 scope.go:117] "RemoveContainer" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.614231 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": container with ID starting with 6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847 not found: ID does not exist" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614277 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"} err="failed to get container status \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": rpc error: code = NotFound desc = could not find container \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": container with ID starting with 6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847 not found: ID does not exist" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614296 4820 scope.go:117] "RemoveContainer" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.614729 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": container with ID starting with c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61 not found: ID does 
not exist" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614753 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"} err="failed to get container status \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": rpc error: code = NotFound desc = could not find container \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": container with ID starting with c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61 not found: ID does not exist" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614766 4820 scope.go:117] "RemoveContainer" containerID="087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.626283 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.626406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.629619 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 
07:09:59.629660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.654737 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.671914 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.672715 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc" (OuterVolumeSpecName: "kube-api-access-7x8xc") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "kube-api-access-7x8xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.679322 4820 scope.go:117] "RemoveContainer" containerID="763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.765531 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.765680 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.765748 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.765721243 +0000 UTC m=+1378.798805451 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.796913 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "placement" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="placement" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.797514 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "neutron" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="neutron" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.806691 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-c8ba-account-create-update-4wwws" podUID="0fa0449e-f842-4605-b814-1e7ede08a5b7" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.806798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-c516-account-create-update-vrfb9" podUID="67b282c5-1012-4188-bc31-b8e7e794bb77" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.835834 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.836091 4820 scope.go:117] "RemoveContainer" containerID="4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.836532 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "barbican" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="barbican" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.838111 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-4e9a-account-create-update-4996c" podUID="6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.868765 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.868810 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.868796385 +0000 UTC m=+1378.901880583 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.869777 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" path="/var/lib/kubelet/pods/06da7378-1c64-43e9-8d97-63a92fe503fc/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.871023 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" path="/var/lib/kubelet/pods/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.871696 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" path="/var/lib/kubelet/pods/1b3f478b-4142-46b8-a9ca-603e9e1860ac/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.872438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" path="/var/lib/kubelet/pods/1fa19e90-7854-4eb9-9b72-26c8d0739851/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.886083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.888517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.900150 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324a15c6-a903-420b-8db4-4268008c83d1" path="/var/lib/kubelet/pods/324a15c6-a903-420b-8db4-4268008c83d1/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.901131 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" path="/var/lib/kubelet/pods/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.902175 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" path="/var/lib/kubelet/pods/3f798ecc-7cdf-4b7b-b8c9-0754d3391676/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.903640 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" path="/var/lib/kubelet/pods/4b9dd869-f673-4077-b345-05b4e79eb590/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.905738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" path="/var/lib/kubelet/pods/4d96a68b-1b90-4fcd-9716-679be14d3157/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.918901 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" 
path="/var/lib/kubelet/pods/9f7e07b2-8561-41da-9c7f-ea5d80280d0a/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.931088 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" path="/var/lib/kubelet/pods/cf044875-b3ef-48f5-b802-1bd167de5685/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.932084 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69a9369-affe-4441-bf33-3c0f13540875" path="/var/lib/kubelet/pods/d69a9369-affe-4441-bf33-3c0f13540875/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.935849 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" path="/var/lib/kubelet/pods/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.936627 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" path="/var/lib/kubelet/pods/e1974d89-b3a1-4cc5-b113-fb39248e5bf0/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.943030 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" path="/var/lib/kubelet/pods/e610e477-7d95-4af5-be48-f8a9acd81d6a/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.961954 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970040 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.973838 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.974115 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.974126 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.977356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.979419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.982159 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts" (OuterVolumeSpecName: "scripts") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.982433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn" (OuterVolumeSpecName: "kube-api-access-vkbbn") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "kube-api-access-vkbbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.009983 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.024968 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.035948 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.036022 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.072540 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:10:00 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: if [ -n "nova_api" ]; then Feb 21 07:10:00 crc kubenswrapper[4820]: GRANT_DATABASE="nova_api" Feb 21 07:10:00 crc kubenswrapper[4820]: else Feb 21 07:10:00 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:10:00 crc kubenswrapper[4820]: fi Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:10:00 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:10:00 crc kubenswrapper[4820]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:10:00 crc kubenswrapper[4820]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:10:00 crc kubenswrapper[4820]: # support updates Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.075403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.075923 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-a80b-account-create-update-w6rwf" podUID="ed145514-af37-491d-bc62-2f84273b4fd0" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079602 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079632 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079644 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079652 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079661 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.112667 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data" (OuterVolumeSpecName: "config-data") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.181918 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.212283 4820 generic.go:334] "Generic (PLEG): container finished" podID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerID="437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.309018 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.309190 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" exitCode=0 Feb 21 07:10:00 crc 
kubenswrapper[4820]: I0221 07:10:00.310658 4820 generic.go:334] "Generic (PLEG): container finished" podID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5" exitCode=1 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.311433 4820 scope.go:117] "RemoveContainer" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.336162 4820 generic.go:334] "Generic (PLEG): container finished" podID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerID="974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.336195 4820 generic.go:334] "Generic (PLEG): container finished" podID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerID="a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.341702 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.359764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360096 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360119 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360134 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-vrfb9" event={"ID":"67b282c5-1012-4188-bc31-b8e7e794bb77","Type":"ContainerStarted","Data":"0bf5947fd1441fc936e5ed5dfa7b04468b4ee6948a25b45d63c164f9452941fa"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360162 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-77csv" event={"ID":"95200e0a-ca93-4303-80af-8b950ddc8746","Type":"ContainerStarted","Data":"57cf883f5a62845b5703775e5d378694a44bfaa0c7228605211777d063adb94a"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360197 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360210 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360219 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360269 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" 
event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"d5ed25326b5133c99c08fd6d1fe6d320a4913920be2b2b8d47571a1f05ab484f"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360308 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.377706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-4wwws" event={"ID":"0fa0449e-f842-4605-b814-1e7ede08a5b7","Type":"ContainerStarted","Data":"7863f6fcecb57bf0d8f98b9a21144496e336d41b9b7a80cb88f8e4fa54e39a4d"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393448 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerID="24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerDied","Data":"24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerDied","Data":"6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393563 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393737 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406146 4820 generic.go:334] "Generic (PLEG): container finished" podID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406249 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406277 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406342 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429425 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429769 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" containerID="cri-o://28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" gracePeriod=30 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429939 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" containerID="cri-o://eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" gracePeriod=30 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429988 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" containerID="cri-o://c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" gracePeriod=30 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.430028 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" containerID="cri-o://e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" gracePeriod=30 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.436833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-w6rwf" event={"ID":"ed145514-af37-491d-bc62-2f84273b4fd0","Type":"ContainerStarted","Data":"234e2a51c95b60e8bddead8141fd036173f79d8091f7c813c28f1e6875ceb592"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.484332 
4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.484643 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" containerID="cri-o://4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" gracePeriod=30 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488393 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488534 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.507210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-4996c" event={"ID":"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05","Type":"ContainerStarted","Data":"a87feecffdb740c28defaad0723df39d6ec5a1e99d858b490aafa6edf23d56e8"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.526058 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5" (OuterVolumeSpecName: "kube-api-access-9d9n5") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "kube-api-access-9d9n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.529326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"86fa03fcf82765a136a3aab82794955988ac327e55c1a34182d75c4632f7c8fc"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.550414 4820 scope.go:117] "RemoveContainer" containerID="9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.582342 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.585537 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.590708 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.608293 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": read tcp 10.217.0.2:40096->10.217.0.170:8776: read: connection reset by peer" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.665364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.701517 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.901640 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data" (OuterVolumeSpecName: "config-data") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.908771 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.919262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.932604 4820 scope.go:117] "RemoveContainer" containerID="e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.011092 4820 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.037626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.113935 4820 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.394928 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.401497 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.420991 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.421047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.421030822 +0000 UTC m=+1380.454115020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.437889 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.443983 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.446889 4820 scope.go:117] "RemoveContainer" containerID="909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.471065 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.471590 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" containerID="cri-o://a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.479401 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.483426 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489322 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489816 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489832 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="init" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489838 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="init" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489852 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489858 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489870 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489878 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489887 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489893 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489906 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489911 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489922 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="mysql-bootstrap" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489927 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="mysql-bootstrap" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489936 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489954 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489959 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489971 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489987 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489993 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490006 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490018 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490023 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490033 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490039 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490195 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490209 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490220 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490231 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490259 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490269 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490280 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490290 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490302 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490311 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490322 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490351 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490969 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.494825 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.507141 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.521912 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522097 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc 
kubenswrapper[4820]: I0221 07:10:01.522135 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.523253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated" 
(OuterVolumeSpecName: "config-data-generated") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.534729 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.535505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536208 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536257 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.539281 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.545637 4820 scope.go:117] "RemoveContainer" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.559765 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.562925 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2" (OuterVolumeSpecName: "kube-api-access-cnbv2") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "kube-api-access-cnbv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.572298 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.617031 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.629797 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.641405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"67b282c5-1012-4188-bc31-b8e7e794bb77\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"ed145514-af37-491d-bc62-2f84273b4fd0\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642682 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod 
\"ed145514-af37-491d-bc62-2f84273b4fd0\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"0fa0449e-f842-4605-b814-1e7ede08a5b7\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"95200e0a-ca93-4303-80af-8b950ddc8746\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"95200e0a-ca93-4303-80af-8b950ddc8746\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643808 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"0fa0449e-f842-4605-b814-1e7ede08a5b7\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"67b282c5-1012-4188-bc31-b8e7e794bb77\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: 
\"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644526 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645156 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645203 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645218 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645332 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67b282c5-1012-4188-bc31-b8e7e794bb77" (UID: "67b282c5-1012-4188-bc31-b8e7e794bb77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed145514-af37-491d-bc62-2f84273b4fd0" (UID: "ed145514-af37-491d-bc62-2f84273b4fd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.647232 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.647273 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.654551 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.657421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95200e0a-ca93-4303-80af-8b950ddc8746" (UID: "95200e0a-ca93-4303-80af-8b950ddc8746"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.664088 4820 scope.go:117] "RemoveContainer" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.664779 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fa0449e-f842-4605-b814-1e7ede08a5b7" (UID: "0fa0449e-f842-4605-b814-1e7ede08a5b7"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.665052 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.665103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.668483 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.669629 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.669888 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw" (OuterVolumeSpecName: "kube-api-access-gmtfw") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "kube-api-access-gmtfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.670827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-w6rwf" event={"ID":"ed145514-af37-491d-bc62-2f84273b4fd0","Type":"ContainerDied","Data":"234e2a51c95b60e8bddead8141fd036173f79d8091f7c813c28f1e6875ceb592"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.670931 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.676216 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688603 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688628 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" exitCode=2 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688637 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688728 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.690213 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-4wwws" event={"ID":"0fa0449e-f842-4605-b814-1e7ede08a5b7","Type":"ContainerDied","Data":"7863f6fcecb57bf0d8f98b9a21144496e336d41b9b7a80cb88f8e4fa54e39a4d"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.690304 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.693190 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.693495 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-665c5b9dff-g2t96" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" containerID="cri-o://200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.711334 4820 generic.go:334] "Generic (PLEG): container finished" podID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" exitCode=2 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.711486 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.730192 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx" (OuterVolumeSpecName: "kube-api-access-76nqx") pod "ed145514-af37-491d-bc62-2f84273b4fd0" (UID: "ed145514-af37-491d-bc62-2f84273b4fd0"). InnerVolumeSpecName "kube-api-access-76nqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.734657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.734832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk" (OuterVolumeSpecName: "kube-api-access-js7bk") pod "0fa0449e-f842-4605-b814-1e7ede08a5b7" (UID: "0fa0449e-f842-4605-b814-1e7ede08a5b7"). InnerVolumeSpecName "kube-api-access-js7bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.737589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7" (OuterVolumeSpecName: "kube-api-access-lgms7") pod "95200e0a-ca93-4303-80af-8b950ddc8746" (UID: "95200e0a-ca93-4303-80af-8b950ddc8746"). InnerVolumeSpecName "kube-api-access-lgms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.737657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c" (OuterVolumeSpecName: "kube-api-access-66b7c") pod "67b282c5-1012-4188-bc31-b8e7e794bb77" (UID: "67b282c5-1012-4188-bc31-b8e7e794bb77"). InnerVolumeSpecName "kube-api-access-66b7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.749303 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750487 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750765 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750895 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.751915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752195 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752238 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 
07:10:01.752275 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752289 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752300 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752317 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752328 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752340 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752362 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752377 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.769097 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.769192 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:02.269161799 +0000 UTC m=+1377.302245997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.769959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" (UID: "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.783599 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.783668 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:02.283643024 +0000 UTC m=+1377.316727222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.818430 4820 scope.go:117] "RemoveContainer" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.819577 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": container with ID starting with 275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074 not found: ID does not exist" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.819616 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"} err="failed to get container status 
\"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": rpc error: code = NotFound desc = could not find container \"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": container with ID starting with 275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074 not found: ID does not exist" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.819660 4820 scope.go:117] "RemoveContainer" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.831072 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9" (OuterVolumeSpecName: "kube-api-access-wlsm9") pod "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" (UID: "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05"). InnerVolumeSpecName "kube-api-access-wlsm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.833589 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": container with ID starting with 3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677 not found: ID does not exist" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.833638 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"} err="failed to get container status \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": rpc error: code = NotFound desc = could not find container \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": container with ID starting with 3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677 not 
found: ID does not exist" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.835292 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" path="/var/lib/kubelet/pods/455bfe0a-a135-4900-8b15-ce584dc8a5bb/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.837344 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" path="/var/lib/kubelet/pods/96d07086-c2e8-4351-bac8-b99c485826c4/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.838647 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9866838-084f-4340-b72d-5dba3461661e" path="/var/lib/kubelet/pods/a9866838-084f-4340-b72d-5dba3461661e/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.839707 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" path="/var/lib/kubelet/pods/d781b010-be2e-465d-9789-d6188ac5a30e/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.840439 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d6374d-1595-4586-b161-d199a2b39068" path="/var/lib/kubelet/pods/d7d6374d-1595-4586-b161-d199a2b39068/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.840912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" path="/var/lib/kubelet/pods/dc228462-9ac8-475c-859b-bbce5678a5ea/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.843829 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" path="/var/lib/kubelet/pods/e533e163-2ccc-4468-9083-c9bf711b0dfb/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.843990 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc" (OuterVolumeSpecName: "kube-api-access-cq6fc") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-api-access-cq6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.845163 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" path="/var/lib/kubelet/pods/f26b24bc-e904-49a1-b2bc-d140b0032b83/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.847860 4820 generic.go:334] "Generic (PLEG): container finished" podID="8c841249-7293-4826-b05f-e4a189aaef07" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856248 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856268 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.882862 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" 
containerID="cri-o://50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.883446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" containerID="cri-o://0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.887646 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.906622 4820 generic.go:334] "Generic (PLEG): container finished" podID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerID="c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.914255 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.917869 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerID="2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.923685 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.928156 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerID="0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.938864 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.953365 4820 generic.go:334] "Generic (PLEG): container finished" podID="899bd84b-c67f-4a89-9f92-a68094530566" containerID="765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.955532 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.964416 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.006516 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podStartSLOduration=7.006496344 podStartE2EDuration="7.006496344s" podCreationTimestamp="2026-02-21 07:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:10:01.905772556 +0000 UTC m=+1376.938856754" watchObservedRunningTime="2026-02-21 07:10:02.006496344 +0000 UTC m=+1377.039580542" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.016145 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.016277 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" 
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.163221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.177570 4820 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.189873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.206329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.217501 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284401 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284421 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284432 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.284831 4820 
configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.284882 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.284865838 +0000 UTC m=+1378.317950036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.308445 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.308682 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.308730 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.308711289 +0000 UTC m=+1378.341795487 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.322419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data" (OuterVolumeSpecName: "config-data") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.337771 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.339456 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.349549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.349668 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385881 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385919 4820 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385929 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385938 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.403039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.412081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.486912 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.486960 4820 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.782363 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.086s" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerDied","Data":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerDied","Data":"71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782463 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/openstack-galera-0"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782539 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerDied","Data":"498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782905 4820 scope.go:117] "RemoveContainer" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783020 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783037 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783073 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-4996c" event={"ID":"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05","Type":"ContainerDied","Data":"a87feecffdb740c28defaad0723df39d6ec5a1e99d858b490aafa6edf23d56e8"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783085 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-77csv" event={"ID":"95200e0a-ca93-4303-80af-8b950ddc8746","Type":"ContainerDied","Data":"57cf883f5a62845b5703775e5d378694a44bfaa0c7228605211777d063adb94a"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"799aa64333911f7111f98ffff76ee1c66aebdf83eeaa6dc6c45e5389c74e915a"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783155 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-vrfb9" event={"ID":"67b282c5-1012-4188-bc31-b8e7e794bb77","Type":"ContainerDied","Data":"0bf5947fd1441fc936e5ed5dfa7b04468b4ee6948a25b45d63c164f9452941fa"} Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.783078 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-p7t2x operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-b298-account-create-update-hxmxb" podUID="5869267a-13d5-4879-a3b0-d0e12ee57b8c" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.813906 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output="command timed out" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.943601 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.944143 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.944799 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.944932 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947768 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947875 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947909 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.949805 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.949848 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.970835 4820 scope.go:117] "RemoveContainer" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.971076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": container with ID starting with 4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632 not found: ID does not exist" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.971108 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"} err="failed to get container status 
\"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": rpc error: code = NotFound desc = could not find container \"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": container with ID starting with 4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632 not found: ID does not exist" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.971132 4820 scope.go:117] "RemoveContainer" containerID="974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.977133 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.978866 4820 generic.go:334] "Generic (PLEG): container finished" podID="61de836b-112e-4002-80c7-5ab77d4b9069" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" exitCode=143 Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.978953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.981534 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08"} Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.981578 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005484 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" 
podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output=< Feb 21 07:10:03 crc kubenswrapper[4820]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Feb 21 07:10:03 crc kubenswrapper[4820]: > Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerDied","Data":"b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005781 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010560 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc 
kubenswrapper[4820]: I0221 07:10:03.010582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010645 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010698 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod 
\"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.012516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.012786 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs" (OuterVolumeSpecName: "logs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.013075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015477 4820 generic.go:334] "Generic (PLEG): container finished" podID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerID="21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015571 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015616 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.016431 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs" (OuterVolumeSpecName: "logs") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.022569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.023162 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.025063 4820 generic.go:334] "Generic (PLEG): container finished" podID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerID="df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.025143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.028274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.028488 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.033461 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4" (OuterVolumeSpecName: "kube-api-access-5rnp4") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "kube-api-access-5rnp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035201 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035389 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" containerID="cri-o://5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" gracePeriod=30 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035570 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" containerID="cri-o://280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" gracePeriod=30 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.037437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts" (OuterVolumeSpecName: "scripts") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.044808 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8" (OuterVolumeSpecName: "kube-api-access-4dzx8") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "kube-api-access-4dzx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.071738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.071837 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.072119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts" (OuterVolumeSpecName: "scripts") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076674 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerID="a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerDied","Data":"a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerDied","Data":"49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076830 4820 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.086670 4820 generic.go:334] "Generic (PLEG): container finished" podID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8" exitCode=1
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.086762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.087421 4820 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-bcvpx" secret="" err="secret \"galera-openstack-dockercfg-ldndf\" not found"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.087460 4820 scope.go:117] "RemoveContainer" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8"
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.087833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-bcvpx_openstack(73b1b012-98c9-49cf-852d-a2ff95b746cf)\"" pod="openstack/root-account-create-update-bcvpx" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096868 4820 generic.go:334] "Generic (PLEG): container finished" podID="4709782f-54e7-4a78-a56e-8f58a5556501" containerID="84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" exitCode=0
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096962 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096991 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.097005 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.107646 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podStartSLOduration=8.107618414 podStartE2EDuration="8.107618414s" podCreationTimestamp="2026-02-21 07:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:10:03.072803574 +0000 UTC m=+1378.105887782" watchObservedRunningTime="2026-02-21 07:10:03.107618414 +0000 UTC m=+1378.140702612"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116296 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116340 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116356 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116370 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116424 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116436 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116449 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116460 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116471 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116482 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.117670 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.118064 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts podName:73b1b012-98c9-49cf-852d-a2ff95b746cf nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.618043018 +0000 UTC m=+1378.651127216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts") pod "root-account-create-update-bcvpx" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf") : configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.142127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.142181 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.154725 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ca75969-e299-435a-a607-d470d4ab831e" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" exitCode=0
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.154808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerDied","Data":"f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157051 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerID="841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" exitCode=0
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157157 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157339 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157357 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.161622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.195107 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" containerID="cri-o://8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" gracePeriod=30
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.206219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.218716 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.218756 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.236723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data" (OuterVolumeSpecName: "config-data") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.237333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.240522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.242750 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.255403 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.256484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data" (OuterVolumeSpecName: "config-data") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.298749 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.318630 4820 scope.go:117] "RemoveContainer" containerID="a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319590 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319667 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319978 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320019 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320028 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320037 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320045 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320053 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320063 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.324348 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.324449 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.324402969 +0000 UTC m=+1380.357487267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.325508 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.325586 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.325564479 +0000 UTC m=+1380.358648787 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.335983 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.355939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv" (OuterVolumeSpecName: "kube-api-access-jhdnv") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "kube-api-access-jhdnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.366412 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.369789 4820 scope.go:117] "RemoveContainer" containerID="437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.383037 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.398953 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.400768 4820 scope.go:117] "RemoveContainer" containerID="4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.426571 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.428356 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.428372 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.456983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data" (OuterVolumeSpecName: "config-data") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.461168 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.479762 4820 scope.go:117] "RemoveContainer" containerID="765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.487860 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.514623 4820 scope.go:117] "RemoveContainer" containerID="9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.516636 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.527038 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530550 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.532505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs" (OuterVolumeSpecName: "logs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.533591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs" (OuterVolumeSpecName: "logs") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.534309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data" (OuterVolumeSpecName: "config-data") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.548123 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts" (OuterVolumeSpecName: "scripts") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.550866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8" (OuterVolumeSpecName: "kube-api-access-tbqb8") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "kube-api-access-tbqb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.550968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt" (OuterVolumeSpecName: "kube-api-access-gnmpt") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "kube-api-access-gnmpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564552 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.565013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.565032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.567204 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.567546 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.569583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.570140 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.570894 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.571839 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573352 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573389 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573969 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.574005 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.574060 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.580747 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts" (OuterVolumeSpecName: "scripts") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.588655 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.589325 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.603567 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.605956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2" (OuterVolumeSpecName: "kube-api-access-mcgq2") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "kube-api-access-mcgq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.612332 4820 scope.go:117] "RemoveContainer" containerID="0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.633061 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.638393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.644075 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.659484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674684 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674728 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674971 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675217 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675246 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675585 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675615 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675626 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675636 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675644 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675653 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.679704 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs" (OuterVolumeSpecName: "logs") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.680066 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.680161 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts podName:73b1b012-98c9-49cf-852d-a2ff95b746cf nodeName:}" failed. No retries permitted until 2026-02-21 07:10:04.680138474 +0000 UTC m=+1379.713222732 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts") pod "root-account-create-update-bcvpx" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf") : configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.681411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs" (OuterVolumeSpecName: "logs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.683944 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs" (OuterVolumeSpecName: "logs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.686720 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.686843 4820 scope.go:117] "RemoveContainer" containerID="d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.687530 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm" (OuterVolumeSpecName: "kube-api-access-jhqdm") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "kube-api-access-jhqdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b" (OuterVolumeSpecName: "kube-api-access-xmf6b") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "kube-api-access-xmf6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729855 4820 scope.go:117] "RemoveContainer" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.735069 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" path="/var/lib/kubelet/pods/7b1db760-d9fc-477f-bc0b-8119d247253b/volumes" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.735903 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" path="/var/lib/kubelet/pods/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d/volumes" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.736564 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" path="/var/lib/kubelet/pods/9235cff6-e0e8-471a-9377-26dfcfd84dac/volumes" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.737982 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" path="/var/lib/kubelet/pods/b81af4bd-d2af-4a26-8f4d-a3e612778607/volumes" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.739330 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data" (OuterVolumeSpecName: "config-data") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.746101 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.768474 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.776454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.776781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777637 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777804 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: 
\"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778002 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778182 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 
07:10:03.778456 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778556 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779285 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779415 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779496 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779577 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779656 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779733 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779800 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779881 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779991 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.780088 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.780164 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: 
I0221 07:10:03.785440 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.785477 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.785484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.790302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs" (OuterVolumeSpecName: "logs") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.792434 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.792487 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:11.792471158 +0000 UTC m=+1386.825555356 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.804148 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.816602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.816619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq" (OuterVolumeSpecName: "kube-api-access-4nbfq") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "kube-api-access-4nbfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.817784 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.833507 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.835226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm" (OuterVolumeSpecName: "kube-api-access-f88cm") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "kube-api-access-f88cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.840232 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.844497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5" (OuterVolumeSpecName: "kube-api-access-c9wf5") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "kube-api-access-c9wf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.863650 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.864896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.871471 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883069 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883094 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883106 4820 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883115 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc 
kubenswrapper[4820]: I0221 07:10:03.883128 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883153 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.883242 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.883295 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:11.883282275 +0000 UTC m=+1386.916366473 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883639 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.889766 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.891696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.903193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.912874 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.930592 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.937129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.939839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data" (OuterVolumeSpecName: "config-data") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.954739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data" (OuterVolumeSpecName: "config-data") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.959283 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.966604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.975628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data" (OuterVolumeSpecName: "config-data") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.976116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data" (OuterVolumeSpecName: "config-data") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.986191 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.988552 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989468 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989497 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989513 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989527 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989539 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989551 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989565 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989577 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989588 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.990145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.999337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.011916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.012100 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.023267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.031610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.037110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data" (OuterVolumeSpecName: "config-data") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.053651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data" (OuterVolumeSpecName: "config-data") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.061128 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.074628 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096814 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096844 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096856 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096865 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096877 4820 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096889 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096923 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096931 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.170699 4820 generic.go:334] "Generic (PLEG): container finished" podID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" exitCode=143 Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.170764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerDied","Data":"fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180575 4820 scope.go:117] "RemoveContainer" 
containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180775 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.197769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"9e23535ae9303b01da633c9a5de5b1cca080fe7244d856307bd78e440fdb1a72"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.197866 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200007 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200971 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201141 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201394 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.205327 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.238629 4820 scope.go:117] "RemoveContainer" containerID="df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.277685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.286820 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.294622 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.309564 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.309601 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.316572 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: 
connection refused" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.319299 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.400016 4820 scope.go:117] "RemoveContainer" containerID="3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.421525 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.424081 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.427116 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.427160 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.443130 4820 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.454516 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.471448 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.482686 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.522468 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.532043 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.540535 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.548274 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.554317 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.560083 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.565964 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.572405 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.577763 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:10:04 
crc kubenswrapper[4820]: I0221 07:10:04.586538 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.594390 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.602573 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.641641 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.720636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"73b1b012-98c9-49cf-852d-a2ff95b746cf\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.720699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"73b1b012-98c9-49cf-852d-a2ff95b746cf\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.721505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73b1b012-98c9-49cf-852d-a2ff95b746cf" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.724770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq" (OuterVolumeSpecName: "kube-api-access-8btrq") pod "73b1b012-98c9-49cf-852d-a2ff95b746cf" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf"). InnerVolumeSpecName "kube-api-access-8btrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.822371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.822880 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214709 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214910 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"e23762ffd7ce106b9f82fdb1d0d30eef475c43de4d355359cab19dd81674c400"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.215018 4820 scope.go:117] "RemoveContainer" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.216613 4820 generic.go:334] "Generic (PLEG): container finished" podID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerID="200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" exitCode=0 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.216660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerDied","Data":"200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223534 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa49984a-9511-4449-adc6-997899961f73" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" exitCode=0 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223620 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"c7e2b7a7c0a492a7d1fe2c8d85d83a8801b3d4fa1ad893af52ea27c7826ffccc"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223733 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246026 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod 
\"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246380 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246415 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.247283 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.248110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.248940 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.258654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.259318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info" (OuterVolumeSpecName: "pod-info") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.262118 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.266749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58" (OuterVolumeSpecName: "kube-api-access-cbf58") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "kube-api-access-cbf58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.275183 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.293070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data" (OuterVolumeSpecName: "config-data") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.340026 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf" (OuterVolumeSpecName: "server-conf") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.348958 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.348998 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349009 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349018 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc 
kubenswrapper[4820]: I0221 07:10:05.349026 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349033 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349042 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349050 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349058 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349066 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.367556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.378252 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.395085 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.398375 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.405411 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.449860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.449931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450067 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450773 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450786 4820 reconciler_common.go:293] "Volume detached for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.450867 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.450932 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:13.450917542 +0000 UTC m=+1388.484001740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.453064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.453919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts" (OuterVolumeSpecName: "scripts") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.459124 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.461842 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j" (OuterVolumeSpecName: "kube-api-access-gzm7j") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "kube-api-access-gzm7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.462000 4820 scope.go:117] "RemoveContainer" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.475372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data" (OuterVolumeSpecName: "config-data") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.482809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.498586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.505427 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.521563 4820 scope.go:117] "RemoveContainer" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551923 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551952 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551963 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551971 4820 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551980 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551988 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551996 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.552003 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.595144 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603432 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a5b71e95-fe49-48b2-8d7b-575e17855d52/ovn-northd/0.log" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603503 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603949 4820 scope.go:117] "RemoveContainer" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.604202 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": container with ID starting with 7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078 not found: ID does not exist" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604228 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"} err="failed to get container status \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": rpc error: code = NotFound desc = could not find container \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": container with ID starting with 7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078 not found: ID does not exist" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604286 4820 scope.go:117] "RemoveContainer" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.604555 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": container with ID starting with 946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc not found: ID does not exist" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604583 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"} err="failed to get container status \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": rpc error: code = NotFound desc = could not find container \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": container with ID starting with 946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc not found: ID does not exist" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604834 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663532 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: 
\"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663720 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.664473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.666686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts" (OuterVolumeSpecName: "scripts") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.666699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config" (OuterVolumeSpecName: "config") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.682754 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb" (OuterVolumeSpecName: "kube-api-access-zgjgb") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "kube-api-access-zgjgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.706704 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca75969-e299-435a-a607-d470d4ab831e" path="/var/lib/kubelet/pods/0ca75969-e299-435a-a607-d470d4ab831e/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.709492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" path="/var/lib/kubelet/pods/0e16d52c-9322-49cf-9948-8d1c56c0a5ed/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.710133 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa0449e-f842-4605-b814-1e7ede08a5b7" path="/var/lib/kubelet/pods/0fa0449e-f842-4605-b814-1e7ede08a5b7/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.710622 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" path="/var/lib/kubelet/pods/4709782f-54e7-4a78-a56e-8f58a5556501/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.711914 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" path="/var/lib/kubelet/pods/4f99a57a-608b-4678-9be5-abc4347c8bcb/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.712387 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5869267a-13d5-4879-a3b0-d0e12ee57b8c" path="/var/lib/kubelet/pods/5869267a-13d5-4879-a3b0-d0e12ee57b8c/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.712756 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b282c5-1012-4188-bc31-b8e7e794bb77" path="/var/lib/kubelet/pods/67b282c5-1012-4188-bc31-b8e7e794bb77/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.713187 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
path="/var/lib/kubelet/pods/6dbc8f44-c54c-42c0-8430-742c6bb61165/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.714745 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" path="/var/lib/kubelet/pods/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.715229 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" path="/var/lib/kubelet/pods/73b1b012-98c9-49cf-852d-a2ff95b746cf/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.716111 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899bd84b-c67f-4a89-9f92-a68094530566" path="/var/lib/kubelet/pods/899bd84b-c67f-4a89-9f92-a68094530566/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.717874 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c841249-7293-4826-b05f-e4a189aaef07" path="/var/lib/kubelet/pods/8c841249-7293-4826-b05f-e4a189aaef07/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.718438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95200e0a-ca93-4303-80af-8b950ddc8746" path="/var/lib/kubelet/pods/95200e0a-ca93-4303-80af-8b950ddc8746/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.719206 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" path="/var/lib/kubelet/pods/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.720044 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" path="/var/lib/kubelet/pods/9eb570ff-2a5e-4913-a84f-346579eaa104/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.721539 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" 
path="/var/lib/kubelet/pods/a112132d-4a29-460c-985d-b0ca2ddb1aa6/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.722215 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" path="/var/lib/kubelet/pods/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.722846 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed145514-af37-491d-bc62-2f84273b4fd0" path="/var/lib/kubelet/pods/ed145514-af37-491d-bc62-2f84273b4fd0/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.724035 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" path="/var/lib/kubelet/pods/ef3827c2-ee55-4f86-a752-d7cbc9c6454e/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.725189 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa49984a-9511-4449-adc6-997899961f73" path="/var/lib/kubelet/pods/fa49984a-9511-4449-adc6-997899961f73/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.745334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.747575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765426 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765458 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765467 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765477 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765487 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765495 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.783596 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: 
"a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.798755 4820 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 21 07:10:05 crc kubenswrapper[4820]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sfpp9" message=< Feb 21 07:10:05 crc kubenswrapper[4820]: Exiting ovn-controller (1) [FAILED] Feb 21 07:10:05 crc kubenswrapper[4820]: Killing ovn-controller (1) [ OK ] Feb 21 07:10:05 crc kubenswrapper[4820]: Killing ovn-controller (1) with SIGKILL [ OK ] Feb 21 07:10:05 crc kubenswrapper[4820]: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.798794 4820 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 21 07:10:05 crc kubenswrapper[4820]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" containerID="cri-o://baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.798831 
4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" containerID="cri-o://baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" gracePeriod=22 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.813471 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866813 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: 
\"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867367 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.871073 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.871084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.883014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info" (OuterVolumeSpecName: "pod-info") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2" (OuterVolumeSpecName: "kube-api-access-k9gg2") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "kube-api-access-k9gg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.893894 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data" (OuterVolumeSpecName: "config-data") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.905182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.954022 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf" (OuterVolumeSpecName: "server-conf") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.968984 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969277 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969361 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969440 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969535 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969608 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969676 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969811 4820 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969914 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969987 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.989359 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079104 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:06 crc kubenswrapper[4820]: W0221 07:10:06.079905 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8b1242f9-d2ac-493c-bc89-43f7be597a75/volumes/kubernetes.io~projected/rabbitmq-confd Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.080376 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.080466 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234269 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234392 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"77697e6f65480c0a8c7ecc85d340b2d52d583c5d92b5093accb994850dd6cd98"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234544 4820 scope.go:117] "RemoveContainer" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.236288 4820 generic.go:334] "Generic (PLEG): container finished" podID="061bac4c-22ff-4144-b114-133ea89494c8" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.236350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerDied","Data":"4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.239281 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerID="8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.239350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7"} Feb 21 07:10:06 crc 
kubenswrapper[4820]: I0221 07:10:06.241435 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a5b71e95-fe49-48b2-8d7b-575e17855d52/ovn-northd/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241467 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" exitCode=139 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241578 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243828 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sfpp9_593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/ovn-controller/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243876 4820 generic.go:334] "Generic (PLEG): container finished" podID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerID="baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" exitCode=137 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerDied","Data":"baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerDied","Data":"a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243988 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.246612 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerDied","Data":"64f0896a03976792d3631a63a19b92a0be5d44121ab07ab2ac5e458129f71510"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.246713 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.256298 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sfpp9_593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/ovn-controller/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.256373 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.259283 4820 scope.go:117] "RemoveContainer" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.281595 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.302841 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322324 4820 scope.go:117] "RemoveContainer" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.322662 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": container with ID starting with 0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f not found: ID does not exist" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322693 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"} err="failed to get container status \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": rpc error: code = NotFound desc = could not find 
container \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": container with ID starting with 0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322711 4820 scope.go:117] "RemoveContainer" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.322948 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": container with ID starting with b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012 not found: ID does not exist" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322965 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"} err="failed to get container status \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": rpc error: code = NotFound desc = could not find container \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": container with ID starting with b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012 not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322976 4820 scope.go:117] "RemoveContainer" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.337900 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.346930 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.353861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.375976 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.387810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.387968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388016 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388047 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388238 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388320 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.389338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run" (OuterVolumeSpecName: "var-run") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: 
"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.390389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.390468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391924 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.393017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts" (OuterVolumeSpecName: "scripts") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.393165 4820 scope.go:117] "RemoveContainer" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.395012 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.406581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq" (OuterVolumeSpecName: "kube-api-access-hxmtq") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "kube-api-access-hxmtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.408998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m" (OuterVolumeSpecName: "kube-api-access-pmn4m") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "kube-api-access-pmn4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.424840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.429208 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.437598 4820 scope.go:117] "RemoveContainer" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.437973 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": container with ID starting with 0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d not found: ID does not exist" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438025 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} err="failed to get container status \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": rpc error: code = NotFound desc = could not find container \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": container with ID starting with 0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438060 4820 scope.go:117] "RemoveContainer" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.438418 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": container with ID starting with 803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30 not found: ID does not exist" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438454 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} err="failed to get container status \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": rpc error: code = NotFound desc = could not find container \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": container with ID starting with 803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30 not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438481 4820 scope.go:117] "RemoveContainer" containerID="200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.454009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.484545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491353 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491377 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491388 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491399 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491413 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491423 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491431 4820 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491438 4820 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491446 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491463 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491472 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491483 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491492 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491500 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.509031 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.523312 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.559982 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593284 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593604 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593617 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.606014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf" (OuterVolumeSpecName: "kube-api-access-vr5zf") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "kube-api-access-vr5zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.619909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data" (OuterVolumeSpecName: "config-data") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.622324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694498 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694519 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694527 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.039887 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.046534 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100714 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc 
kubenswrapper[4820]: I0221 07:10:07.100958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.101627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.101839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs" (OuterVolumeSpecName: "logs") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.110971 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.117414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d" (OuterVolumeSpecName: "kube-api-access-29x9d") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "kube-api-access-29x9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.118388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.119325 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts" (OuterVolumeSpecName: "scripts") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.136373 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf" (OuterVolumeSpecName: "kube-api-access-wz8jf") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "kube-api-access-wz8jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.142053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.150306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.151553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.175997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data" (OuterVolumeSpecName: "config-data") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.179613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data" (OuterVolumeSpecName: "config-data") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.180122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213052 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213095 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213105 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213115 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 
07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213124 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213133 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213141 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213149 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213157 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213166 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213174 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213181 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213189 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265456 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42ba382-9e03-4f39-904e-87f4d764175c" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" exitCode=0 Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"232aac902ab163c61332ca9251f3b8bd22a0d25dd116a7153f1bb796d475d539"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265544 4820 scope.go:117] "RemoveContainer" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265630 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.274464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerDied","Data":"167b5165b4391c8783b551aad0df3cc918db35e3f8cb50ff81e948ca2a961b4f"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.274553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.286885 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"506d7091e1481dd403657fac413ff300e649bdb874981551b296a055c67d3957"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.287022 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.304486 4820 scope.go:117] "RemoveContainer" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.329485 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" exitCode=0 Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.329589 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330379 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"0c38be7124a920b640712dd690755259fce0c90bcf50290cc80460e97c079adc"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.336303 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.366660 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.385455 4820 scope.go:117] "RemoveContainer" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.398007 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": container with ID starting with 53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5 not found: ID does not exist" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.398045 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"} err="failed to get container status \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": rpc error: code = NotFound desc = could not find container \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": container with ID starting with 53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.398068 4820 scope.go:117] "RemoveContainer" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.400536 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": container with ID starting with cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d not found: ID does not exist" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.400572 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"} err="failed to get container status \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": rpc error: code = NotFound desc = could not find container \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": container with ID starting with cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.400597 4820 scope.go:117] "RemoveContainer" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.407301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.419832 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.433169 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.441036 4820 scope.go:117] "RemoveContainer" containerID="8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.441164 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.469895 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.479452 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.480402 4820 scope.go:117] "RemoveContainer" containerID="5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.494306 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.504686 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.517675 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.106:11211: i/o timeout" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.518931 4820 scope.go:117] "RemoveContainer" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 
07:10:07.541958 4820 scope.go:117] "RemoveContainer" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.561412 4820 scope.go:117] "RemoveContainer" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.602557 4820 scope.go:117] "RemoveContainer" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.626664 4820 scope.go:117] "RemoveContainer" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627064 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": container with ID starting with eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d not found: ID does not exist" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627092 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} err="failed to get container status \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": rpc error: code = NotFound desc = could not find container \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": container with ID starting with eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627113 4820 scope.go:117] "RemoveContainer" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627456 4820 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": container with ID starting with c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208 not found: ID does not exist" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627503 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} err="failed to get container status \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": rpc error: code = NotFound desc = could not find container \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": container with ID starting with c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627536 4820 scope.go:117] "RemoveContainer" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627874 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": container with ID starting with e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3 not found: ID does not exist" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627902 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} err="failed to get container status \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": rpc error: code = NotFound desc = could 
not find container \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": container with ID starting with e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627917 4820 scope.go:117] "RemoveContainer" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.628334 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": container with ID starting with 28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6 not found: ID does not exist" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.628355 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} err="failed to get container status \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": rpc error: code = NotFound desc = could not find container \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": container with ID starting with 28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.709374 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061bac4c-22ff-4144-b114-133ea89494c8" path="/var/lib/kubelet/pods/061bac4c-22ff-4144-b114-133ea89494c8/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.709894 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" path="/var/lib/kubelet/pods/0a392f2a-5040-417a-b860-13fa886ea2a2/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 
07:10:07.710549 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" path="/var/lib/kubelet/pods/16ebfdb2-72a8-40c6-b0ed-012f138025b2/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.711628 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" path="/var/lib/kubelet/pods/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.712198 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" path="/var/lib/kubelet/pods/6c6905da-351a-426d-a36c-0b05dfa993a9/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.713369 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" path="/var/lib/kubelet/pods/8b1242f9-d2ac-493c-bc89-43f7be597a75/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.713996 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" path="/var/lib/kubelet/pods/a5b71e95-fe49-48b2-8d7b-575e17855d52/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.714666 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" path="/var/lib/kubelet/pods/f42ba382-9e03-4f39-904e-87f4d764175c/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.944341 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.944849 4820 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945285 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945318 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945975 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.947311 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.948363 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.948395 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:11 crc kubenswrapper[4820]: E0221 07:10:11.893808 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:11 crc kubenswrapper[4820]: E0221 07:10:11.893896 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:27.893879365 +0000 UTC m=+1402.926963563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.946378 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.947366 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.948771 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949378 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" 
containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949495 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949950 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.952568 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.952701 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.944077 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.944726 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945100 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945132 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945772 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.947098 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.948201 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.948230 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462008 4820 generic.go:334] "Generic (PLEG): container finished" podID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerID="cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" exitCode=0 Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462097 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117"} Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" 
event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639"} Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462645 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.500840 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.549982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550058 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc 
kubenswrapper[4820]: I0221 07:10:21.550119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.572144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.575497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc" (OuterVolumeSpecName: "kube-api-access-ktrzc") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "kube-api-access-ktrzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.621335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.623757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.628644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.634065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config" (OuterVolumeSpecName: "config") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.651397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.651890 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: W0221 07:10:21.652040 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d/volumes/kubernetes.io~secret/ovndb-tls-certs Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652444 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652456 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652465 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652473 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652481 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652490 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652501 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.470871 4820 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.491348 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.498099 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.944107 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.944814 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945423 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945490 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945450 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.947353 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.949301 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.949343 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:23 crc kubenswrapper[4820]: I0221 07:10:23.706091 4820 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" path="/var/lib/kubelet/pods/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d/volumes" Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.941176 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.941587 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:59.941568918 +0000 UTC m=+1434.974653106 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944085 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944426 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944722 4820 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944748 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.945124 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.946302 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.947568 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.947607 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.527493 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.528051 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.528954 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" exitCode=137 Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.529025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6"} Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537218 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" exitCode=137 Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537271 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"} Feb 21 07:10:28 crc kubenswrapper[4820]: 
I0221 07:10:28.537350 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f"} Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537422 4820 scope.go:117] "RemoveContainer" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.562890 4820 scope.go:117] "RemoveContainer" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.591571 4820 scope.go:117] "RemoveContainer" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.628523 4820 scope.go:117] "RemoveContainer" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.649994 4820 scope.go:117] "RemoveContainer" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.651918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.651960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " 
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652683 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache" (OuterVolumeSpecName: "cache") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock" (OuterVolumeSpecName: "lock") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.653564 4820 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.653587 4820 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.660882 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc" (OuterVolumeSpecName: "kube-api-access-2pmsc") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "kube-api-access-2pmsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.660937 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.668635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.669581 4820 scope.go:117] "RemoveContainer" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.695876 4820 scope.go:117] "RemoveContainer" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.720164 4820 scope.go:117] "RemoveContainer" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.739939 4820 scope.go:117] "RemoveContainer" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754552 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754591 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754601 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc 
kubenswrapper[4820]: I0221 07:10:28.758182 4820 scope.go:117] "RemoveContainer" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.767053 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.780269 4820 scope.go:117] "RemoveContainer" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.797312 4820 scope.go:117] "RemoveContainer" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.810864 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.811827 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.812884 4820 scope.go:117] "RemoveContainer" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.832555 4820 scope.go:117] "RemoveContainer" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.853492 4820 scope.go:117] "RemoveContainer" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.854926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.854957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc 
kubenswrapper[4820]: I0221 07:10:28.855072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855909 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts" (OuterVolumeSpecName: "scripts") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857367 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log" (OuterVolumeSpecName: "var-log") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run" (OuterVolumeSpecName: "var-run") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib" (OuterVolumeSpecName: "var-lib") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.860656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j" (OuterVolumeSpecName: "kube-api-access-hg85j") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "kube-api-access-hg85j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884447 4820 scope.go:117] "RemoveContainer" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.884876 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": container with ID starting with 4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e not found: ID does not exist" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884905 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"} err="failed to get container status \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": rpc error: code = NotFound desc = could not find container \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": container with ID starting with 4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884924 4820 scope.go:117] "RemoveContainer" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.885341 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": container with ID starting with 697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf not found: ID does not exist" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.885405 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"} err="failed to get container status \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": rpc error: code = NotFound desc = could not find container \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": container with ID starting with 697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.885439 4820 scope.go:117] "RemoveContainer" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.886447 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": container with ID starting with adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895 not found: ID does not exist" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.886487 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"} err="failed to get container status \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": rpc error: code = NotFound desc = could not find container \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": container with ID starting with adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.886530 4820 scope.go:117] "RemoveContainer" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 
07:10:28.887439 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": container with ID starting with b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91 not found: ID does not exist" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887476 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"} err="failed to get container status \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": rpc error: code = NotFound desc = could not find container \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": container with ID starting with b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887497 4820 scope.go:117] "RemoveContainer" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.887825 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": container with ID starting with 15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02 not found: ID does not exist" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887852 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"} err="failed to get container status \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": rpc 
error: code = NotFound desc = could not find container \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": container with ID starting with 15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887870 4820 scope.go:117] "RemoveContainer" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888133 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": container with ID starting with 143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d not found: ID does not exist" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888163 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"} err="failed to get container status \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": rpc error: code = NotFound desc = could not find container \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": container with ID starting with 143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888181 4820 scope.go:117] "RemoveContainer" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888536 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": container with ID starting with 
1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f not found: ID does not exist" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888574 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"} err="failed to get container status \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": rpc error: code = NotFound desc = could not find container \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": container with ID starting with 1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888593 4820 scope.go:117] "RemoveContainer" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888823 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": container with ID starting with 5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612 not found: ID does not exist" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888846 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"} err="failed to get container status \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": rpc error: code = NotFound desc = could not find container \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": container with ID starting with 5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612 not found: ID does not 
exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888859 4820 scope.go:117] "RemoveContainer" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889064 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": container with ID starting with c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713 not found: ID does not exist" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889085 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"} err="failed to get container status \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": rpc error: code = NotFound desc = could not find container \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": container with ID starting with c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889098 4820 scope.go:117] "RemoveContainer" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889325 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": container with ID starting with 956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1 not found: ID does not exist" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889343 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"} err="failed to get container status \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": rpc error: code = NotFound desc = could not find container \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": container with ID starting with 956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889355 4820 scope.go:117] "RemoveContainer" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": container with ID starting with 472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7 not found: ID does not exist" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889586 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"} err="failed to get container status \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": rpc error: code = NotFound desc = could not find container \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": container with ID starting with 472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889598 4820 scope.go:117] "RemoveContainer" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889774 4820 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": container with ID starting with 3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3 not found: ID does not exist" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"} err="failed to get container status \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": rpc error: code = NotFound desc = could not find container \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": container with ID starting with 3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889800 4820 scope.go:117] "RemoveContainer" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890002 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": container with ID starting with 6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53 not found: ID does not exist" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"} err="failed to get container status \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": rpc error: code = NotFound desc = could 
not find container \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": container with ID starting with 6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890031 4820 scope.go:117] "RemoveContainer" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890258 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": container with ID starting with 8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2 not found: ID does not exist" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890276 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"} err="failed to get container status \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": rpc error: code = NotFound desc = could not find container \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": container with ID starting with 8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890292 4820 scope.go:117] "RemoveContainer" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": container with ID starting with ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80 not found: 
ID does not exist" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890507 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"} err="failed to get container status \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": rpc error: code = NotFound desc = could not find container \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": container with ID starting with ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80 not found: ID does not exist" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.933574 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.956990 4820 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957020 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957030 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957040 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957048 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957056 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957066 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.221921 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-storage-0"] Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.227116 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.552149 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553201 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93"} Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553305 4820 scope.go:117] "RemoveContainer" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.585347 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.588464 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.706443 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" path="/var/lib/kubelet/pods/7880da24-89a6-4428-b9c1-5ffe6647af01/volumes" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.707483 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2200daa-1861-49f4-965a-68417ec65542" path="/var/lib/kubelet/pods/b2200daa-1861-49f4-965a-68417ec65542/volumes" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.771379 4820 scope.go:117] "RemoveContainer" 
containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.788228 4820 scope.go:117] "RemoveContainer" containerID="f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.361993 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415167 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs" (OuterVolumeSpecName: "logs") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415443 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.419792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.419990 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl" (OuterVolumeSpecName: "kube-api-access-h77jl") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "kube-api-access-h77jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.442458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.450803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data" (OuterVolumeSpecName: "config-data") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516926 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516971 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516984 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516998 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") on node 
\"crc\" DevicePath \"\"" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583789 4820 generic.go:334] "Generic (PLEG): container finished" podID="61de836b-112e-4002-80c7-5ab77d4b9069" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" exitCode=137 Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583838 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"} Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"d5ed25326b5133c99c08fd6d1fe6d320a4913920be2b2b8d47571a1f05ab484f"} Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583872 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583891 4820 scope.go:117] "RemoveContainer" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.613159 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.613671 4820 scope.go:117] "RemoveContainer" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.619175 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630488 4820 scope.go:117] "RemoveContainer" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" Feb 21 07:10:32 crc kubenswrapper[4820]: E0221 07:10:32.630916 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": container with ID starting with 0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d not found: ID does not exist" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630955 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"} err="failed to get container status \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": rpc error: code = NotFound desc = could not find container \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": container with ID starting with 0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d not found: ID does not 
exist" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630986 4820 scope.go:117] "RemoveContainer" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" Feb 21 07:10:32 crc kubenswrapper[4820]: E0221 07:10:32.631204 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": container with ID starting with 50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35 not found: ID does not exist" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.631229 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"} err="failed to get container status \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": rpc error: code = NotFound desc = could not find container \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": container with ID starting with 50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35 not found: ID does not exist" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.374663 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.428965 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429121 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs" (OuterVolumeSpecName: "logs") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429694 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.432437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.432961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6" (OuterVolumeSpecName: "kube-api-access-6r9h6") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "kube-api-access-6r9h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.446472 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.466148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data" (OuterVolumeSpecName: "config-data") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531359 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531392 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531401 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531409 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596652 4820 generic.go:334] "Generic (PLEG): container finished" podID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" exitCode=137 Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"} Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"86fa03fcf82765a136a3aab82794955988ac327e55c1a34182d75c4632f7c8fc"} Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596749 4820 scope.go:117] "RemoveContainer" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596867 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.622574 4820 scope.go:117] "RemoveContainer" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.624919 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.633936 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641086 4820 scope.go:117] "RemoveContainer" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" Feb 21 07:10:33 crc kubenswrapper[4820]: E0221 07:10:33.641461 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": container with ID starting with 280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e not 
found: ID does not exist" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641497 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"} err="failed to get container status \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": rpc error: code = NotFound desc = could not find container \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": container with ID starting with 280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e not found: ID does not exist" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641521 4820 scope.go:117] "RemoveContainer" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" Feb 21 07:10:33 crc kubenswrapper[4820]: E0221 07:10:33.642079 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": container with ID starting with 5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1 not found: ID does not exist" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.642103 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} err="failed to get container status \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": rpc error: code = NotFound desc = could not find container \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": container with ID starting with 5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1 not found: ID does not exist" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.703779 
4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" path="/var/lib/kubelet/pods/5916b629-5e69-4ad3-9180-c07181d3ff37/volumes" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.704378 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" path="/var/lib/kubelet/pods/61de836b-112e-4002-80c7-5ab77d4b9069/volumes" Feb 21 07:10:43 crc kubenswrapper[4820]: I0221 07:10:43.816779 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:10:43 crc kubenswrapper[4820]: I0221 07:10:43.818372 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:13 crc kubenswrapper[4820]: I0221 07:11:13.816927 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:11:13 crc kubenswrapper[4820]: I0221 07:11:13.817879 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.415384 4820 
scope.go:117] "RemoveContainer" containerID="b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.435507 4820 scope.go:117] "RemoveContainer" containerID="25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.464917 4820 scope.go:117] "RemoveContainer" containerID="77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.484665 4820 scope.go:117] "RemoveContainer" containerID="3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.507839 4820 scope.go:117] "RemoveContainer" containerID="b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.533511 4820 scope.go:117] "RemoveContainer" containerID="51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.551999 4820 scope.go:117] "RemoveContainer" containerID="ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.566507 4820 scope.go:117] "RemoveContainer" containerID="3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.595398 4820 scope.go:117] "RemoveContainer" containerID="ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.609538 4820 scope.go:117] "RemoveContainer" containerID="baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.624709 4820 scope.go:117] "RemoveContainer" containerID="bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.638267 4820 scope.go:117] 
"RemoveContainer" containerID="bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.652492 4820 scope.go:117] "RemoveContainer" containerID="6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.681087 4820 scope.go:117] "RemoveContainer" containerID="2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.695198 4820 scope.go:117] "RemoveContainer" containerID="03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.710799 4820 scope.go:117] "RemoveContainer" containerID="a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.727422 4820 scope.go:117] "RemoveContainer" containerID="4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.747525 4820 scope.go:117] "RemoveContainer" containerID="fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.767217 4820 scope.go:117] "RemoveContainer" containerID="6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.784433 4820 scope.go:117] "RemoveContainer" containerID="9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.801784 4820 scope.go:117] "RemoveContainer" containerID="4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.403999 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404700 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404711 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404723 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404730 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404737 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404743 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404757 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404773 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404784 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404789 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404807 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404821 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404829 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404835 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404844 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404859 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404864 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404872 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404878 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404885 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404891 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404900 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404905 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404912 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404918 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" 
containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404934 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404943 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404949 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404959 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404964 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404974 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404979 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404988 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404993 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404999 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405011 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405016 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405024 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405031 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405040 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405046 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="mysql-bootstrap" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405061 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="mysql-bootstrap" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405071 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405076 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405084 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405090 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405099 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405106 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405114 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405120 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405127 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405133 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405139 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405155 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405161 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405167 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405181 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405195 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405200 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc 
kubenswrapper[4820]: E0221 07:11:38.405208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405213 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405247 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405254 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405266 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405273 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405279 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" 
containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405286 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405291 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405300 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405306 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405314 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405320 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405327 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405333 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405340 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" 
containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405357 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405364 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server-init" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405370 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server-init" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405377 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405382 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405389 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405395 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405402 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405408 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" 
containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405414 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405419 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405427 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405432 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405441 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405446 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405454 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405460 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405474 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" 
containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405483 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405490 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405498 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405503 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405513 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405519 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405549 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405556 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405563 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 
07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405575 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405581 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405588 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405594 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405603 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405609 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405728 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405741 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405750 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405758 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 
crc kubenswrapper[4820]: I0221 07:11:38.405764 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405769 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405780 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405789 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405806 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405814 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405820 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405830 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405836 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405845 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405861 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405867 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405874 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405882 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405888 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405895 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405905 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405914 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405922 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405930 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405938 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405948 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405955 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405961 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405967 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405974 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405979 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405988 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405995 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406003 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406012 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406019 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406024 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406031 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406038 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406047 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406053 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406061 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406068 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406076 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406085 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406091 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406104 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406114 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406121 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406128 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406138 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406147 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406154 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406160 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406168 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406174 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.407087 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.418775 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553112 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553256 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654466 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.655051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.655274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.673309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.726247 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:39 crc kubenswrapper[4820]: I0221 07:11:39.176418 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.161974 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" exitCode=0 Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.162071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd"} Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.170853 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"3cca4ef5882796f022029d444d82100b128dbefcd368d40ca8b6e76495cb4966"} Feb 21 07:11:41 crc kubenswrapper[4820]: I0221 07:11:41.181658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} Feb 21 07:11:42 crc kubenswrapper[4820]: I0221 07:11:42.194231 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" exitCode=0 Feb 21 07:11:42 crc kubenswrapper[4820]: I0221 07:11:42.194333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" 
event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.202650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.223730 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g68lp" podStartSLOduration=2.82425382 podStartE2EDuration="5.223709911s" podCreationTimestamp="2026-02-21 07:11:38 +0000 UTC" firstStartedPulling="2026-02-21 07:11:40.164549423 +0000 UTC m=+1475.197633611" lastFinishedPulling="2026-02-21 07:11:42.564005464 +0000 UTC m=+1477.597089702" observedRunningTime="2026-02-21 07:11:43.219940018 +0000 UTC m=+1478.253024236" watchObservedRunningTime="2026-02-21 07:11:43.223709911 +0000 UTC m=+1478.256794109" Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816485 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816791 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:11:44 crc kubenswrapper[4820]: I0221 07:11:44.209002 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:11:44 crc kubenswrapper[4820]: I0221 07:11:44.209079 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" gracePeriod=600 Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.218360 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" exitCode=0 Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.218420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"} Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.219841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.219868 4820 scope.go:117] "RemoveContainer" 
containerID="c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.726643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.727391 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.768287 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:49 crc kubenswrapper[4820]: I0221 07:11:49.296475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:49 crc kubenswrapper[4820]: I0221 07:11:49.339797 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:51 crc kubenswrapper[4820]: I0221 07:11:51.273147 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g68lp" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" containerID="cri-o://71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" gracePeriod=2 Feb 21 07:11:52 crc kubenswrapper[4820]: I0221 07:11:52.934776 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.058860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.058946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.059031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.060015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities" (OuterVolumeSpecName: "utilities") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.065138 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7" (OuterVolumeSpecName: "kube-api-access-9j4m7") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "kube-api-access-9j4m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.160416 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.160461 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.202517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.262361 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289568 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" exitCode=0 Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289638 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289659 4820 scope.go:117] "RemoveContainer" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"3cca4ef5882796f022029d444d82100b128dbefcd368d40ca8b6e76495cb4966"} Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.322329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.325214 4820 scope.go:117] "RemoveContainer" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.326158 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.346521 4820 scope.go:117] "RemoveContainer" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.369327 4820 scope.go:117] "RemoveContainer" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 07:11:53.370035 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": container with ID starting with 71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98 not found: ID does not exist" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370073 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} err="failed to get container status \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": rpc error: code = NotFound desc = could not find container \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": container with ID starting with 71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98 not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370097 4820 scope.go:117] "RemoveContainer" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 07:11:53.370581 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": container with ID starting with eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429 not found: ID does not exist" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370615 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} err="failed to get container status \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": rpc error: code = NotFound desc = could not find container \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": container with ID starting with eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429 not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370643 4820 scope.go:117] "RemoveContainer" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 
07:11:53.370947 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": container with ID starting with a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd not found: ID does not exist" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370971 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd"} err="failed to get container status \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": rpc error: code = NotFound desc = could not find container \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": container with ID starting with a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.713641 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" path="/var/lib/kubelet/pods/72ced9f9-32a3-46c4-b5a9-7c6d394bd164/volumes" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.118277 4820 scope.go:117] "RemoveContainer" containerID="52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.163920 4820 scope.go:117] "RemoveContainer" containerID="d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.183513 4820 scope.go:117] "RemoveContainer" containerID="e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.212650 4820 scope.go:117] "RemoveContainer" containerID="550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3" Feb 21 07:12:15 crc 
kubenswrapper[4820]: I0221 07:12:15.242565 4820 scope.go:117] "RemoveContainer" containerID="ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.265668 4820 scope.go:117] "RemoveContainer" containerID="902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.302221 4820 scope.go:117] "RemoveContainer" containerID="2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.321072 4820 scope.go:117] "RemoveContainer" containerID="84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.337311 4820 scope.go:117] "RemoveContainer" containerID="1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.851565 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852314 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-utilities" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852332 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-utilities" Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852353 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-content" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852380 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-content" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852546 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.853785 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.861097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: 
\"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117918 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.118446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.118472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") 
" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.138466 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.175044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.616391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499273 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" exitCode=0 Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30"} Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerStarted","Data":"53fc57866f63f70098d655c5a5614087b69ee94673e7a6a34fd55d921072b114"} Feb 21 07:12:22 crc kubenswrapper[4820]: I0221 07:12:22.511376 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" exitCode=0 Feb 21 07:12:22 crc 
kubenswrapper[4820]: I0221 07:12:22.511476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb"} Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.348466 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.350287 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.363973 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.469854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.469978 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.470015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: 
\"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.520755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerStarted","Data":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.540715 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2wrp" podStartSLOduration=3.115931908 podStartE2EDuration="4.540694678s" podCreationTimestamp="2026-02-21 07:12:19 +0000 UTC" firstStartedPulling="2026-02-21 07:12:21.501057704 +0000 UTC m=+1516.534141902" lastFinishedPulling="2026-02-21 07:12:22.925820474 +0000 UTC m=+1517.958904672" observedRunningTime="2026-02-21 07:12:23.535863096 +0000 UTC m=+1518.568947294" watchObservedRunningTime="2026-02-21 07:12:23.540694678 +0000 UTC m=+1518.573778866" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.590771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.669263 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.118033 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:24 crc kubenswrapper[4820]: W0221 07:12:24.121620 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1a58f68_a763_4319_a105_a195c741011f.slice/crio-8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac WatchSource:0}: Error finding container 8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac: Status 404 returned error can't find the container with id 8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530348 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" exitCode=0 Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11"} Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530459 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerStarted","Data":"8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac"} Feb 21 07:12:25 crc kubenswrapper[4820]: I0221 07:12:25.562176 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" exitCode=0 Feb 21 07:12:25 crc kubenswrapper[4820]: I0221 
07:12:25.562457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92"} Feb 21 07:12:26 crc kubenswrapper[4820]: I0221 07:12:26.572569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerStarted","Data":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} Feb 21 07:12:26 crc kubenswrapper[4820]: I0221 07:12:26.601933 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zxhsj" podStartSLOduration=2.172310301 podStartE2EDuration="3.601918152s" podCreationTimestamp="2026-02-21 07:12:23 +0000 UTC" firstStartedPulling="2026-02-21 07:12:24.532013953 +0000 UTC m=+1519.565098151" lastFinishedPulling="2026-02-21 07:12:25.961621804 +0000 UTC m=+1520.994706002" observedRunningTime="2026-02-21 07:12:26.599315571 +0000 UTC m=+1521.632399869" watchObservedRunningTime="2026-02-21 07:12:26.601918152 +0000 UTC m=+1521.635002340" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.175204 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.175690 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.221634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.658224 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 
07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.722990 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:32 crc kubenswrapper[4820]: I0221 07:12:32.613045 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2wrp" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" containerID="cri-o://fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" gracePeriod=2 Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.053169 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.207866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.208184 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.208212 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.209056 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities" (OuterVolumeSpecName: "utilities") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.214426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj" (OuterVolumeSpecName: "kube-api-access-kvnwj") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "kube-api-access-kvnwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.311731 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.311769 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.443276 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.514500 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622719 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" exitCode=0 Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"53fc57866f63f70098d655c5a5614087b69ee94673e7a6a34fd55d921072b114"} Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622806 4820 scope.go:117] "RemoveContainer" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622834 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.642741 4820 scope.go:117] "RemoveContainer" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.667341 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.668534 4820 scope.go:117] "RemoveContainer" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.670852 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.670945 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.683158 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.694664 4820 scope.go:117] "RemoveContainer" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.695120 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": container with ID starting with fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf not found: ID does not exist" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695152 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} err="failed to get container status \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": rpc error: code = NotFound desc = could not find container \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": container with ID starting with fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695172 4820 scope.go:117] "RemoveContainer" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.695664 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": container with ID starting with 4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb not found: ID does not exist" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695795 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb"} err="failed to get container status \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": rpc error: code = NotFound desc = could not find container \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": container with ID starting with 4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695890 4820 scope.go:117] "RemoveContainer" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.697723 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": container with ID starting with f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30 not found: ID does not exist" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.697802 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30"} err="failed to get container status \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": rpc error: code = NotFound desc = could not find container \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": container with ID starting with f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30 not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.705280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de999a72-1e7e-461a-a907-c24875dba879" path="/var/lib/kubelet/pods/de999a72-1e7e-461a-a907-c24875dba879/volumes" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.736817 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:34 crc kubenswrapper[4820]: I0221 07:12:34.679551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:35 crc kubenswrapper[4820]: I0221 07:12:35.851525 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:37 crc kubenswrapper[4820]: I0221 07:12:37.658532 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zxhsj" podUID="e1a58f68-a763-4319-a105-a195c741011f" 
containerName="registry-server" containerID="cri-o://6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" gracePeriod=2 Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.069455 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184085 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.185732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities" (OuterVolumeSpecName: "utilities") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.186133 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.194743 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t" (OuterVolumeSpecName: "kube-api-access-28h2t") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "kube-api-access-28h2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.213372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.287760 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.287793 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669819 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" exitCode=0 Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669877 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669893 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.670306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac"} Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.670347 4820 scope.go:117] "RemoveContainer" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.688957 4820 scope.go:117] "RemoveContainer" 
containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.704797 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.712349 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.724252 4820 scope.go:117] "RemoveContainer" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.742795 4820 scope.go:117] "RemoveContainer" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.743199 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": container with ID starting with 6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3 not found: ID does not exist" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743314 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} err="failed to get container status \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": rpc error: code = NotFound desc = could not find container \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": container with ID starting with 6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3 not found: ID does not exist" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743397 4820 scope.go:117] "RemoveContainer" 
containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.743714 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": container with ID starting with 65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92 not found: ID does not exist" containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743794 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92"} err="failed to get container status \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": rpc error: code = NotFound desc = could not find container \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": container with ID starting with 65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92 not found: ID does not exist" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743858 4820 scope.go:117] "RemoveContainer" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.744132 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": container with ID starting with e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11 not found: ID does not exist" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.744202 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11"} err="failed to get container status \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": rpc error: code = NotFound desc = could not find container \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": container with ID starting with e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11 not found: ID does not exist" Feb 21 07:12:39 crc kubenswrapper[4820]: I0221 07:12:39.719785 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a58f68-a763-4319-a105-a195c741011f" path="/var/lib/kubelet/pods/e1a58f68-a763-4319-a105-a195c741011f/volumes" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.271489 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272423 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272441 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272465 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272480 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272488 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272508 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272596 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272622 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272636 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272643 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272795 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272810 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.273707 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.282984 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.492693 4820 scope.go:117] "RemoveContainer" containerID="54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518232 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: 
\"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518890 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.519212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.528718 4820 scope.go:117] "RemoveContainer" containerID="f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.542430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4878\" 
(UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.582293 4820 scope.go:117] "RemoveContainer" containerID="0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.604468 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.604756 4820 scope.go:117] "RemoveContainer" containerID="8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.653449 4820 scope.go:117] "RemoveContainer" containerID="fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.686558 4820 scope.go:117] "RemoveContainer" containerID="eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.730090 4820 scope.go:117] "RemoveContainer" containerID="c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.758153 4820 scope.go:117] "RemoveContainer" containerID="bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.785387 4820 scope.go:117] "RemoveContainer" containerID="2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.822601 4820 scope.go:117] "RemoveContainer" containerID="826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.854916 4820 scope.go:117] "RemoveContainer" 
containerID="cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.889141 4820 scope.go:117] "RemoveContainer" containerID="c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.917581 4820 scope.go:117] "RemoveContainer" containerID="89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.095501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983218 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" exitCode=0 Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf"} Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerStarted","Data":"8d55eafc614b0a6bbfb5f893449a921ca315613e72a61449dede6af0b0e34777"} Feb 21 07:13:19 crc kubenswrapper[4820]: I0221 07:13:19.002660 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" exitCode=0 Feb 21 07:13:19 crc kubenswrapper[4820]: I0221 07:13:19.003144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" 
event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12"} Feb 21 07:13:20 crc kubenswrapper[4820]: I0221 07:13:20.013702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerStarted","Data":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} Feb 21 07:13:20 crc kubenswrapper[4820]: I0221 07:13:20.038637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8459" podStartSLOduration=2.580394691 podStartE2EDuration="5.038610543s" podCreationTimestamp="2026-02-21 07:13:15 +0000 UTC" firstStartedPulling="2026-02-21 07:13:16.985692176 +0000 UTC m=+1572.018776384" lastFinishedPulling="2026-02-21 07:13:19.443908038 +0000 UTC m=+1574.476992236" observedRunningTime="2026-02-21 07:13:20.033568296 +0000 UTC m=+1575.066652524" watchObservedRunningTime="2026-02-21 07:13:20.038610543 +0000 UTC m=+1575.071694781" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.605589 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.606351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.657183 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:26 crc kubenswrapper[4820]: I0221 07:13:26.089201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:26 crc kubenswrapper[4820]: I0221 07:13:26.137082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.064800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8459" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" containerID="cri-o://e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" gracePeriod=2 Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.478940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.615102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities" (OuterVolumeSpecName: "utilities") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: 
"5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.623734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878" (OuterVolumeSpecName: "kube-api-access-t4878") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: "5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "kube-api-access-t4878". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.715928 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.715987 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081400 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" exitCode=0 Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081475 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"8d55eafc614b0a6bbfb5f893449a921ca315613e72a61449dede6af0b0e34777"} Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081535 4820 scope.go:117] "RemoveContainer" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.110785 4820 scope.go:117] "RemoveContainer" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.153021 4820 scope.go:117] "RemoveContainer" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.159797 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: "5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172268 4820 scope.go:117] "RemoveContainer" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.172736 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": container with ID starting with e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550 not found: ID does not exist" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172771 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} err="failed to get container status \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": rpc error: code = NotFound desc = could not find container \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": container with ID starting with e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550 not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172793 4820 scope.go:117] "RemoveContainer" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.173017 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": container with ID starting with e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12 not found: ID does not exist" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173064 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12"} err="failed to get container status \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": rpc error: code = NotFound desc = could not find container \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": container with ID starting with e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12 not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173095 4820 scope.go:117] "RemoveContainer" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.173413 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": container with ID starting with ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf not found: ID does not exist" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173437 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf"} err="failed to get container status \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": rpc error: code = NotFound desc = could not find container \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": container with ID starting with ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.228349 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.426177 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.432812 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.708848 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" path="/var/lib/kubelet/pods/5f95139f-3378-4e78-b252-d5c8675b569d/volumes" Feb 21 07:14:13 crc kubenswrapper[4820]: I0221 07:14:13.816841 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:14:13 crc kubenswrapper[4820]: I0221 07:14:13.817389 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.111073 4820 scope.go:117] "RemoveContainer" containerID="24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.143492 4820 scope.go:117] "RemoveContainer" containerID="41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.192187 4820 scope.go:117] "RemoveContainer" containerID="8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" Feb 
21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.217480 4820 scope.go:117] "RemoveContainer" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.238209 4820 scope.go:117] "RemoveContainer" containerID="9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.253838 4820 scope.go:117] "RemoveContainer" containerID="e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.291053 4820 scope.go:117] "RemoveContainer" containerID="f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" Feb 21 07:14:43 crc kubenswrapper[4820]: I0221 07:14:43.816434 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:14:43 crc kubenswrapper[4820]: I0221 07:14:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.144661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-utilities" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145835 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-utilities" Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145848 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-content" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145856 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-content" Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145881 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.146052 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.146581 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.148454 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.148455 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.156369 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.260398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.266532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.275961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.467233 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.876365 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771496 4820 generic.go:334] "Generic (PLEG): container finished" podID="ebbbeb29-093d-424c-aa21-a711f564f201" containerID="a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0" exitCode=0 Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerDied","Data":"a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0"} Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" 
event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerStarted","Data":"91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f"} Feb 21 07:15:02 crc kubenswrapper[4820]: I0221 07:15:02.993037 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099809 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.100770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.107679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.124463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp" (OuterVolumeSpecName: "kube-api-access-wl9wp") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "kube-api-access-wl9wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201471 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201521 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201539 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.785986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" 
event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerDied","Data":"91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f"} Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.786313 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.786036 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.815880 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.816414 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.816456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.817112 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:15:13 crc 
kubenswrapper[4820]: I0221 07:15:13.817169 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" gracePeriod=600 Feb 21 07:15:13 crc kubenswrapper[4820]: E0221 07:15:13.951440 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864325 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" exitCode=0 Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864708 4820 scope.go:117] "RemoveContainer" containerID="382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.865119 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:14 crc kubenswrapper[4820]: E0221 07:15:14.865381 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.430327 4820 scope.go:117] "RemoveContainer" containerID="23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.454202 4820 scope.go:117] "RemoveContainer" containerID="841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.472028 4820 scope.go:117] "RemoveContainer" containerID="ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.532944 4820 scope.go:117] "RemoveContainer" containerID="21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.562774 4820 scope.go:117] "RemoveContainer" containerID="4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" Feb 21 07:15:27 crc kubenswrapper[4820]: I0221 07:15:27.696274 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:27 crc kubenswrapper[4820]: E0221 07:15:27.697125 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:41 crc kubenswrapper[4820]: I0221 07:15:41.697508 4820 scope.go:117] "RemoveContainer" 
containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:41 crc kubenswrapper[4820]: E0221 07:15:41.698347 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:54 crc kubenswrapper[4820]: I0221 07:15:54.696426 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:54 crc kubenswrapper[4820]: E0221 07:15:54.697442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:08 crc kubenswrapper[4820]: I0221 07:16:08.696430 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:08 crc kubenswrapper[4820]: E0221 07:16:08.697129 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:20 crc kubenswrapper[4820]: I0221 07:16:20.696877 4820 scope.go:117] 
"RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:20 crc kubenswrapper[4820]: E0221 07:16:20.697964 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:33 crc kubenswrapper[4820]: I0221 07:16:33.697432 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:33 crc kubenswrapper[4820]: E0221 07:16:33.697848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:45 crc kubenswrapper[4820]: I0221 07:16:45.704887 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:45 crc kubenswrapper[4820]: E0221 07:16:45.705564 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:57 crc kubenswrapper[4820]: I0221 07:16:57.696943 
4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:57 crc kubenswrapper[4820]: E0221 07:16:57.697380 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:08 crc kubenswrapper[4820]: I0221 07:17:08.696833 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:08 crc kubenswrapper[4820]: E0221 07:17:08.697835 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:21 crc kubenswrapper[4820]: I0221 07:17:21.697224 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:21 crc kubenswrapper[4820]: E0221 07:17:21.698101 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:35 crc kubenswrapper[4820]: I0221 
07:17:35.702560 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:35 crc kubenswrapper[4820]: E0221 07:17:35.703555 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:47 crc kubenswrapper[4820]: I0221 07:17:47.696910 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:47 crc kubenswrapper[4820]: E0221 07:17:47.699571 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:01 crc kubenswrapper[4820]: I0221 07:18:01.697227 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:01 crc kubenswrapper[4820]: E0221 07:18:01.697904 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:15 crc 
kubenswrapper[4820]: I0221 07:18:15.707472 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:15 crc kubenswrapper[4820]: E0221 07:18:15.709062 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:27 crc kubenswrapper[4820]: I0221 07:18:27.697253 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:27 crc kubenswrapper[4820]: E0221 07:18:27.698049 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:39 crc kubenswrapper[4820]: I0221 07:18:39.697102 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:39 crc kubenswrapper[4820]: E0221 07:18:39.697864 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 
21 07:18:52 crc kubenswrapper[4820]: I0221 07:18:52.696900 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:52 crc kubenswrapper[4820]: E0221 07:18:52.697724 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:19:06 crc kubenswrapper[4820]: I0221 07:19:06.696641 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:19:06 crc kubenswrapper[4820]: E0221 07:19:06.697363 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:19:17 crc kubenswrapper[4820]: I0221 07:19:17.697452 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:19:17 crc kubenswrapper[4820]: E0221 07:19:17.698349 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:19:30 crc kubenswrapper[4820]: I0221 07:19:30.697093 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:19:30 crc kubenswrapper[4820]: E0221 07:19:30.697884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:19:41 crc kubenswrapper[4820]: I0221 07:19:41.697473 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:19:41 crc kubenswrapper[4820]: E0221 07:19:41.699339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:19:55 crc kubenswrapper[4820]: I0221 07:19:55.701340 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:19:55 crc kubenswrapper[4820]: E0221 07:19:55.702162 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:20:06 crc kubenswrapper[4820]: I0221 07:20:06.696845 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:20:06 crc kubenswrapper[4820]: E0221 07:20:06.697645 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:20:19 crc kubenswrapper[4820]: I0221 07:20:19.697725 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:20:20 crc kubenswrapper[4820]: I0221 07:20:20.279923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"} Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.917853 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:36 crc kubenswrapper[4820]: E0221 07:22:36.919816 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles" Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.919836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles" Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.920025 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles" Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.921194 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.930871 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.050836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.051054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.051274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: 
\"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.175455 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " 
pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.277657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.739707 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.425448 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6" exitCode=0 Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.427649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"} Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.427720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"ce1a9f67f3249b9da77b6bfcf849a1f251744eca44d09eed846c3de212c14f17"} Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.428638 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:22:39 crc kubenswrapper[4820]: I0221 07:22:39.435777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"} Feb 21 07:22:40 crc kubenswrapper[4820]: I0221 07:22:40.447796 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" 
containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4" exitCode=0 Feb 21 07:22:40 crc kubenswrapper[4820]: I0221 07:22:40.447856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"} Feb 21 07:22:41 crc kubenswrapper[4820]: I0221 07:22:41.461606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"} Feb 21 07:22:41 crc kubenswrapper[4820]: I0221 07:22:41.494169 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cdlsm" podStartSLOduration=3.061840848 podStartE2EDuration="5.494146674s" podCreationTimestamp="2026-02-21 07:22:36 +0000 UTC" firstStartedPulling="2026-02-21 07:22:38.428178739 +0000 UTC m=+2133.461262977" lastFinishedPulling="2026-02-21 07:22:40.860484565 +0000 UTC m=+2135.893568803" observedRunningTime="2026-02-21 07:22:41.489506754 +0000 UTC m=+2136.522590982" watchObservedRunningTime="2026-02-21 07:22:41.494146674 +0000 UTC m=+2136.527230882" Feb 21 07:22:43 crc kubenswrapper[4820]: I0221 07:22:43.816658 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:22:43 crc kubenswrapper[4820]: I0221 07:22:43.817087 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.278752 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.279168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.346981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.548510 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.594513 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:49 crc kubenswrapper[4820]: I0221 07:22:49.518902 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cdlsm" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server" containerID="cri-o://5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" gracePeriod=2 Feb 21 07:22:49 crc kubenswrapper[4820]: I0221 07:22:49.992203 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.077643 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities" (OuterVolumeSpecName: "utilities") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.088929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk" (OuterVolumeSpecName: "kube-api-access-nbldk") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "kube-api-access-nbldk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.115304 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178416 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178460 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") on node \"crc\" DevicePath \"\"" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178474 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534668 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" exitCode=0 Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"} Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"ce1a9f67f3249b9da77b6bfcf849a1f251744eca44d09eed846c3de212c14f17"} Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534884 4820 scope.go:117] "RemoveContainer" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534938 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.582574 4820 scope.go:117] "RemoveContainer" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.621828 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.622855 4820 scope.go:117] "RemoveContainer" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.631787 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"] Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655178 4820 scope.go:117] "RemoveContainer" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 07:22:50.655714 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": container with ID starting with 5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388 not found: ID does not exist" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655798 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"} err="failed to get container status \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": rpc error: code = NotFound desc = could not find container \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": container with ID starting with 5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388 not found: ID does not exist" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655850 4820 scope.go:117] "RemoveContainer" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4" Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 07:22:50.656157 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": container with ID starting with 57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4 not found: ID does not exist" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656192 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"} err="failed to get container status \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": rpc error: code = NotFound desc = could not find container \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": container with ID starting with 57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4 not found: ID does not exist" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656213 4820 scope.go:117] "RemoveContainer" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6" Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 
07:22:50.656893 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": container with ID starting with 19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6 not found: ID does not exist" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6" Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656921 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"} err="failed to get container status \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": rpc error: code = NotFound desc = could not find container \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": container with ID starting with 19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6 not found: ID does not exist" Feb 21 07:22:51 crc kubenswrapper[4820]: I0221 07:22:51.707552 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" path="/var/lib/kubelet/pods/c54ffd60-01b5-4ac5-9466-eb97debf8fa9/volumes" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.022383 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023234 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-content" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-content" Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023358 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" 
containerName="extract-utilities" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023376 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-utilities" Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023399 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023416 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.025736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.039925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 
21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.269953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270610 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 
07:22:56.270982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.293488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.359220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.826604 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594512 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" exitCode=0 Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8"} Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" 
event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"7dad3769cb5d649f5dc179f5360f48af0dbab75bb74b9c79adf06a61b5a619cb"} Feb 21 07:22:58 crc kubenswrapper[4820]: I0221 07:22:58.604476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"} Feb 21 07:22:59 crc kubenswrapper[4820]: I0221 07:22:59.614967 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" exitCode=0 Feb 21 07:22:59 crc kubenswrapper[4820]: I0221 07:22:59.615023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"} Feb 21 07:23:00 crc kubenswrapper[4820]: I0221 07:23:00.627291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"} Feb 21 07:23:00 crc kubenswrapper[4820]: I0221 07:23:00.656271 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrjdr" podStartSLOduration=3.237053367 podStartE2EDuration="5.656221702s" podCreationTimestamp="2026-02-21 07:22:55 +0000 UTC" firstStartedPulling="2026-02-21 07:22:57.596851307 +0000 UTC m=+2152.629935545" lastFinishedPulling="2026-02-21 07:23:00.016019642 +0000 UTC m=+2155.049103880" observedRunningTime="2026-02-21 07:23:00.649629194 +0000 UTC m=+2155.682713442" watchObservedRunningTime="2026-02-21 07:23:00.656221702 +0000 UTC m=+2155.689305930" 
Feb 21 07:23:06 crc kubenswrapper[4820]: I0221 07:23:06.359895 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:06 crc kubenswrapper[4820]: I0221 07:23:06.360625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:07 crc kubenswrapper[4820]: I0221 07:23:07.423072 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrjdr" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" probeResult="failure" output=< Feb 21 07:23:07 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 07:23:07 crc kubenswrapper[4820]: > Feb 21 07:23:13 crc kubenswrapper[4820]: I0221 07:23:13.816084 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:23:13 crc kubenswrapper[4820]: I0221 07:23:13.816689 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.434151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.494304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.691854 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:23:17 crc kubenswrapper[4820]: I0221 07:23:17.751775 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrjdr" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" containerID="cri-o://83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" gracePeriod=2 Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.193528 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356744 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.357631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities" (OuterVolumeSpecName: "utilities") pod 
"b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.365426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj" (OuterVolumeSpecName: "kube-api-access-khskj") pod "b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "kube-api-access-khskj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.458154 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") on node \"crc\" DevicePath \"\"" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.458194 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.527641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.559703 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763126 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" exitCode=0 Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"} Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763306 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"7dad3769cb5d649f5dc179f5360f48af0dbab75bb74b9c79adf06a61b5a619cb"} Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763358 4820 scope.go:117] "RemoveContainer" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.791586 4820 scope.go:117] "RemoveContainer" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.822576 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.837119 4820 scope.go:117] "RemoveContainer" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.838574 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.859216 4820 scope.go:117] "RemoveContainer" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 07:23:18.860202 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": container with ID starting with 83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660 not found: ID does not exist" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860295 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"} err="failed to get container status \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": rpc error: code = NotFound desc = could not find container \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": container with ID starting with 83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660 not found: ID does not exist" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860370 4820 scope.go:117] "RemoveContainer" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 07:23:18.860866 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": container with ID starting with 5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6 not found: ID does not exist" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860945 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"} err="failed to get container status \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": rpc error: code = NotFound desc = could not find container \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": container with ID starting with 5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6 not found: ID does not exist" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860990 4820 scope.go:117] "RemoveContainer" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 
07:23:18.861768 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": container with ID starting with 662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8 not found: ID does not exist" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.861803 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8"} err="failed to get container status \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": rpc error: code = NotFound desc = could not find container \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": container with ID starting with 662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8 not found: ID does not exist" Feb 21 07:23:19 crc kubenswrapper[4820]: I0221 07:23:19.713931 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" path="/var/lib/kubelet/pods/b6ab96ec-4842-4dbf-bb94-58ebaac1a551/volumes" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.816752 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817270 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817875 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817922 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" gracePeriod=600 Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"} Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994370 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" exitCode=0 Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994859 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:23:45 crc kubenswrapper[4820]: I0221 07:23:45.004034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.618078 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619360 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-utilities" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619385 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-utilities" Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619440 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619454 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619480 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-content" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619494 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-content" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619805 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.621979 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.638317 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851343 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.852573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.852908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.886346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.947633 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:18 crc kubenswrapper[4820]: I0221 07:24:18.488139 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.259892 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" exitCode=0 Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.259965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2"} Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.260209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"6bc53e469972753514545b99cd59ba9fb24a9e09aeb649985dd2366cd22715e8"} Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.190981 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.192583 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.200886 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.268126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.386834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.386957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.387012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.387928 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.388098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.429135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrkc\" (UniqueName: 
\"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.518925 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.964371 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.278115 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" exitCode=0 Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.278204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280834 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" exitCode=0 Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab"} Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" 
event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"2963a33f03df3165cacdeb753981d1b27c38d9d369803edadbd56752c233cb3f"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.289827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.292786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.340060 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7ml8" podStartSLOduration=2.788103926 podStartE2EDuration="5.340043884s" podCreationTimestamp="2026-02-21 07:24:17 +0000 UTC" firstStartedPulling="2026-02-21 07:24:19.264586622 +0000 UTC m=+2234.297670820" lastFinishedPulling="2026-02-21 07:24:21.81652656 +0000 UTC m=+2236.849610778" observedRunningTime="2026-02-21 07:24:22.337931246 +0000 UTC m=+2237.371015444" watchObservedRunningTime="2026-02-21 07:24:22.340043884 +0000 UTC m=+2237.373128082" Feb 21 07:24:23 crc kubenswrapper[4820]: I0221 07:24:23.300650 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" exitCode=0 Feb 21 07:24:23 crc kubenswrapper[4820]: I0221 07:24:23.300719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" 
event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} Feb 21 07:24:24 crc kubenswrapper[4820]: I0221 07:24:24.323311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} Feb 21 07:24:24 crc kubenswrapper[4820]: I0221 07:24:24.347024 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nz2j4" podStartSLOduration=1.947210765 podStartE2EDuration="4.347008415s" podCreationTimestamp="2026-02-21 07:24:20 +0000 UTC" firstStartedPulling="2026-02-21 07:24:21.282129441 +0000 UTC m=+2236.315213649" lastFinishedPulling="2026-02-21 07:24:23.681927101 +0000 UTC m=+2238.715011299" observedRunningTime="2026-02-21 07:24:24.342043751 +0000 UTC m=+2239.375127949" watchObservedRunningTime="2026-02-21 07:24:24.347008415 +0000 UTC m=+2239.380092613" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.948034 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.948982 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.987482 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:28 crc kubenswrapper[4820]: I0221 07:24:28.396351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:28 crc kubenswrapper[4820]: I0221 07:24:28.581533 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.370364 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7ml8" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" containerID="cri-o://a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" gracePeriod=2 Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.519656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.519716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.587622 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.785571 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848772 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.850091 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities" (OuterVolumeSpecName: "utilities") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.853892 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr" (OuterVolumeSpecName: "kube-api-access-kmddr") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "kube-api-access-kmddr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.898877 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949716 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949753 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949762 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381208 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" exitCode=0 Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381329 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381393 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"6bc53e469972753514545b99cd59ba9fb24a9e09aeb649985dd2366cd22715e8"} Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381417 4820 scope.go:117] "RemoveContainer" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.399444 4820 scope.go:117] "RemoveContainer" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.421419 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.428416 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.433483 4820 scope.go:117] "RemoveContainer" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.450124 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.457500 4820 scope.go:117] "RemoveContainer" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc 
kubenswrapper[4820]: E0221 07:24:31.458072 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": container with ID starting with a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad not found: ID does not exist" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458128 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} err="failed to get container status \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": rpc error: code = NotFound desc = could not find container \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": container with ID starting with a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458155 4820 scope.go:117] "RemoveContainer" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: E0221 07:24:31.458620 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": container with ID starting with 77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0 not found: ID does not exist" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458766 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} err="failed to get container status 
\"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": rpc error: code = NotFound desc = could not find container \"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": container with ID starting with 77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0 not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458813 4820 scope.go:117] "RemoveContainer" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: E0221 07:24:31.459169 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": container with ID starting with bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2 not found: ID does not exist" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.459198 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2"} err="failed to get container status \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": rpc error: code = NotFound desc = could not find container \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": container with ID starting with bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2 not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.710908 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" path="/var/lib/kubelet/pods/133ffeb7-28b1-4e97-a617-84328eac0f17/volumes" Feb 21 07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.583331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 
07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.583845 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nz2j4" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" containerID="cri-o://d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" gracePeriod=2 Feb 21 07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.964945 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098012 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098190 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098948 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities" (OuterVolumeSpecName: "utilities") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.099037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.100453 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.103745 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc" (OuterVolumeSpecName: "kube-api-access-zvrkc") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "kube-api-access-zvrkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.154915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.200896 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.200927 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405773 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" exitCode=0 Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405868 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.406289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"2963a33f03df3165cacdeb753981d1b27c38d9d369803edadbd56752c233cb3f"} Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.406315 4820 scope.go:117] "RemoveContainer" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.427191 4820 scope.go:117] "RemoveContainer" 
containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.445178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.451326 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.462153 4820 scope.go:117] "RemoveContainer" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475323 4820 scope.go:117] "RemoveContainer" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.475585 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": container with ID starting with d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365 not found: ID does not exist" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475692 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} err="failed to get container status \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": rpc error: code = NotFound desc = could not find container \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": container with ID starting with d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365 not found: ID does not exist" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475791 4820 scope.go:117] "RemoveContainer" 
containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.476191 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": container with ID starting with 5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b not found: ID does not exist" containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476215 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} err="failed to get container status \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": rpc error: code = NotFound desc = could not find container \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": container with ID starting with 5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b not found: ID does not exist" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476229 4820 scope.go:117] "RemoveContainer" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.476521 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": container with ID starting with f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab not found: ID does not exist" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476601 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab"} err="failed to get container status \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": rpc error: code = NotFound desc = could not find container \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": container with ID starting with f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab not found: ID does not exist" Feb 21 07:24:35 crc kubenswrapper[4820]: I0221 07:24:35.713743 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0affc452-556a-4307-9201-fed39571b1d0" path="/var/lib/kubelet/pods/0affc452-556a-4307-9201-fed39571b1d0/volumes" Feb 21 07:26:13 crc kubenswrapper[4820]: I0221 07:26:13.816047 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:26:13 crc kubenswrapper[4820]: I0221 07:26:13.816592 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:26:43 crc kubenswrapper[4820]: I0221 07:26:43.816540 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:26:43 crc kubenswrapper[4820]: I0221 07:26:43.818694 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.816579 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817165 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817925 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.818002 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" gracePeriod=600 Feb 21 07:27:13 crc kubenswrapper[4820]: E0221 07:27:13.941523 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730162 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" exitCode=0 Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730258 4820 scope.go:117] "RemoveContainer" containerID="864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730735 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:14 crc kubenswrapper[4820]: E0221 07:27:14.730952 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:25 crc kubenswrapper[4820]: I0221 07:27:25.702518 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:25 crc kubenswrapper[4820]: E0221 07:27:25.703583 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:36 crc kubenswrapper[4820]: I0221 07:27:36.696386 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:36 crc kubenswrapper[4820]: E0221 07:27:36.697111 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:50 crc kubenswrapper[4820]: I0221 07:27:50.696688 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:50 crc kubenswrapper[4820]: E0221 07:27:50.697610 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:05 crc kubenswrapper[4820]: I0221 07:28:05.701880 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:05 crc kubenswrapper[4820]: E0221 07:28:05.702565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:17 crc kubenswrapper[4820]: I0221 07:28:17.697919 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:17 crc kubenswrapper[4820]: E0221 07:28:17.698634 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:29 crc kubenswrapper[4820]: I0221 07:28:29.697037 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:29 crc kubenswrapper[4820]: E0221 07:28:29.697884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:41 crc kubenswrapper[4820]: I0221 07:28:41.697320 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:41 crc kubenswrapper[4820]: E0221 07:28:41.698565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:56 crc kubenswrapper[4820]: I0221 07:28:56.697354 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:56 crc kubenswrapper[4820]: E0221 07:28:56.699683 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:07 crc kubenswrapper[4820]: I0221 07:29:07.697093 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:07 crc kubenswrapper[4820]: E0221 07:29:07.698338 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:21 crc kubenswrapper[4820]: I0221 07:29:21.698044 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:21 crc kubenswrapper[4820]: E0221 07:29:21.699466 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:32 crc kubenswrapper[4820]: I0221 07:29:32.697350 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:32 crc kubenswrapper[4820]: E0221 07:29:32.698595 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:43 crc kubenswrapper[4820]: I0221 07:29:43.697060 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:43 crc kubenswrapper[4820]: E0221 07:29:43.697817 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:56 crc kubenswrapper[4820]: I0221 07:29:56.697014 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:56 crc kubenswrapper[4820]: E0221 07:29:56.697748 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.180817 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.183079 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.183491 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.183739 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.183937 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.184174 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.184417 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.184635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.184828 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.185070 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.185318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.185537 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.185722 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.186953 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.187223 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.188523 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.192404 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.193808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.194050 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.346933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347688 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347774 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.359574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"
Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.366123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"
Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.514109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"
Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.980563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"]
Feb 21 07:30:01 crc kubenswrapper[4820]: I0221 07:30:01.133958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerStarted","Data":"797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc"}
Feb 21 07:30:02 crc kubenswrapper[4820]: I0221 07:30:02.148020 4820 generic.go:334] "Generic (PLEG): container finished" podID="9686bf95-baf7-4066-8769-66f168be0215" containerID="c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f" exitCode=0
Feb 21 07:30:02 crc kubenswrapper[4820]: I0221 07:30:02.148121 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerDied","Data":"c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f"}
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.446422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.594124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") "
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.594205 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") "
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume" (OuterVolumeSpecName: "config-volume") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595211 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") "
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595408 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") on node \"crc\" DevicePath \"\""
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.600847 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.601116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n" (OuterVolumeSpecName: "kube-api-access-5669n") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "kube-api-access-5669n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.696976 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.697072 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") on node \"crc\" DevicePath \"\""
Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177029 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerDied","Data":"797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc"}
Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177086 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc"
Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177123 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"
Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.542383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"]
Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.554146 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"]
Feb 21 07:30:05 crc kubenswrapper[4820]: I0221 07:30:05.706778 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" path="/var/lib/kubelet/pods/0b009b00-dfa6-40ba-b629-608fc71dc429/volumes"
Feb 21 07:30:11 crc kubenswrapper[4820]: I0221 07:30:11.697947 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:30:11 crc kubenswrapper[4820]: E0221 07:30:11.698658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:30:16 crc kubenswrapper[4820]: I0221 07:30:16.986825 4820 scope.go:117] "RemoveContainer" containerID="d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae"
Feb 21 07:30:22 crc kubenswrapper[4820]: I0221 07:30:22.696999 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:30:22 crc kubenswrapper[4820]: E0221 07:30:22.697763 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:30:33 crc kubenswrapper[4820]: I0221 07:30:33.696574 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:30:33 crc kubenswrapper[4820]: E0221 07:30:33.697725 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:30:45 crc kubenswrapper[4820]: I0221 07:30:45.704299 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:30:45 crc kubenswrapper[4820]: E0221 07:30:45.705309 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:30:57 crc kubenswrapper[4820]: I0221 07:30:57.697471 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:30:57 crc kubenswrapper[4820]: E0221 07:30:57.699907 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:31:09 crc kubenswrapper[4820]: I0221 07:31:09.697857 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:31:09 crc kubenswrapper[4820]: E0221 07:31:09.698934 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:31:21 crc kubenswrapper[4820]: I0221 07:31:21.696961 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:31:21 crc kubenswrapper[4820]: E0221 07:31:21.698311 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:31:34 crc kubenswrapper[4820]: I0221 07:31:34.697239 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:31:34 crc kubenswrapper[4820]: E0221 07:31:34.698315 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:31:46 crc kubenswrapper[4820]: I0221 07:31:46.697298 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:31:46 crc kubenswrapper[4820]: E0221 07:31:46.698478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:31:57 crc kubenswrapper[4820]: I0221 07:31:57.697277 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:31:57 crc kubenswrapper[4820]: E0221 07:31:57.698020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:32:12 crc kubenswrapper[4820]: I0221 07:32:12.697039 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:32:12 crc kubenswrapper[4820]: E0221 07:32:12.697741 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:32:25 crc kubenswrapper[4820]: I0221 07:32:25.701223 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"
Feb 21 07:32:26 crc kubenswrapper[4820]: I0221 07:32:26.455895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"}
Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.966355 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:25 crc kubenswrapper[4820]: E0221 07:33:25.967813 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles"
Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.967836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles"
Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.968458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles"
Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.976087 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.992197 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190671 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190763 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.191285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.191485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.216033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.320161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.721209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.936693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d"}
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.936738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"800f5073be14762000edf6d05d7997d9f766e39dadc5e37374ca63e0465e3c6c"}
Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.938471 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945039 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d" exitCode=0
Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d"}
Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110"}
Feb 21 07:33:28 crc kubenswrapper[4820]: I0221 07:33:28.957180 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110" exitCode=0
Feb 21 07:33:28 crc kubenswrapper[4820]: I0221 07:33:28.957314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110"}
Feb 21 07:33:29 crc kubenswrapper[4820]: I0221 07:33:29.964434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05"}
Feb 21 07:33:29 crc kubenswrapper[4820]: I0221 07:33:29.988002 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddb8j" podStartSLOduration=2.38307466 podStartE2EDuration="4.987969802s" podCreationTimestamp="2026-02-21 07:33:25 +0000 UTC" firstStartedPulling="2026-02-21 07:33:26.938294092 +0000 UTC m=+2781.971378290" lastFinishedPulling="2026-02-21 07:33:29.543189244 +0000 UTC m=+2784.576273432" observedRunningTime="2026-02-21 07:33:29.97872409 +0000 UTC m=+2785.011808308" watchObservedRunningTime="2026-02-21 07:33:29.987969802 +0000 UTC m=+2785.021054040"
Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.321381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.323736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.395096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:37 crc kubenswrapper[4820]: I0221 07:33:37.090611 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:37 crc kubenswrapper[4820]: I0221 07:33:37.164475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:39 crc kubenswrapper[4820]: I0221 07:33:39.033457 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddb8j" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server" containerID="cri-o://a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05" gracePeriod=2
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.045351 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05" exitCode=0
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.045442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05"}
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.613789 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.718517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") "
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.718647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") "
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.720462 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") "
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.720866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities" (OuterVolumeSpecName: "utilities") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.721514 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.729436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw" (OuterVolumeSpecName: "kube-api-access-ppggw") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "kube-api-access-ppggw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.822664 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") on node \"crc\" DevicePath \"\""
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.922992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.924933 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"800f5073be14762000edf6d05d7997d9f766e39dadc5e37374ca63e0465e3c6c"}
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057747 4820 scope.go:117] "RemoveContainer" containerID="a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05"
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057779 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j"
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.083794 4820 scope.go:117] "RemoveContainer" containerID="9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110"
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.122894 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.130363 4820 scope.go:117] "RemoveContainer" containerID="ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d"
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.135576 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"]
Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.711571 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" path="/var/lib/kubelet/pods/aee28481-4767-447d-97ea-0c0a44652ec4/volumes"
Feb 21 07:34:43 crc kubenswrapper[4820]: I0221 07:34:43.817351 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:34:43 crc kubenswrapper[4820]: I0221 07:34:43.818078 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.801185 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69wjz"]
Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-utilities"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802229 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-utilities"
Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802296 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-content"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-content"
Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802334 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802347 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802556 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.804357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.814609 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69wjz"]
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075269 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.100651 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.175097 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69wjz"
Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.661634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69wjz"]
Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.652610 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4" exitCode=0
Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.652768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4"}
Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.653199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerStarted","Data":"daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2"}
Feb 21 07:34:52 crc kubenswrapper[4820]: I0221 07:34:52.667857 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0" exitCode=0
Feb 21 07:34:52 crc kubenswrapper[4820]: I0221 07:34:52.667917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz"
event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0"} Feb 21 07:34:53 crc kubenswrapper[4820]: I0221 07:34:53.681943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerStarted","Data":"2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053"} Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.175512 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.176202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.258550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.283123 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69wjz" podStartSLOduration=9.886325012 podStartE2EDuration="11.283094047s" podCreationTimestamp="2026-02-21 07:34:49 +0000 UTC" firstStartedPulling="2026-02-21 07:34:51.655960057 +0000 UTC m=+2866.689044295" lastFinishedPulling="2026-02-21 07:34:53.052729112 +0000 UTC m=+2868.085813330" observedRunningTime="2026-02-21 07:34:53.708473643 +0000 UTC m=+2868.741557882" watchObservedRunningTime="2026-02-21 07:35:00.283094047 +0000 UTC m=+2875.316178275" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.814728 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.892023 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:02 crc kubenswrapper[4820]: I0221 07:35:02.757113 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69wjz" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" containerID="cri-o://2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" gracePeriod=2 Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.767951 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" exitCode=0 Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053"} Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2"} Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768478 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.769633 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913249 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913327 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.914978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities" (OuterVolumeSpecName: "utilities") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.920032 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x" (OuterVolumeSpecName: "kube-api-access-m2g8x") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "kube-api-access-m2g8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.967393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015606 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015823 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015910 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.778661 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.832575 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.839317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:05 crc kubenswrapper[4820]: I0221 07:35:05.711986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" path="/var/lib/kubelet/pods/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf/volumes" Feb 21 07:35:13 crc kubenswrapper[4820]: I0221 07:35:13.816030 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:35:13 crc kubenswrapper[4820]: I0221 07:35:13.816716 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.816264 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.816941 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.817027 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.818048 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.818158 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" gracePeriod=600 Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.123835 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" exitCode=0 Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.123911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"} Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.124039 4820 scope.go:117] "RemoveContainer" 
containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:35:45 crc kubenswrapper[4820]: I0221 07:35:45.136186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} Feb 21 07:38:13 crc kubenswrapper[4820]: I0221 07:38:13.816063 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:38:13 crc kubenswrapper[4820]: I0221 07:38:13.816710 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:38:43 crc kubenswrapper[4820]: I0221 07:38:43.816628 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:38:43 crc kubenswrapper[4820]: I0221 07:38:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.816514 4820 patch_prober.go:28] 
interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817065 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817992 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.818068 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" gracePeriod=600 Feb 21 07:39:13 crc kubenswrapper[4820]: E0221 07:39:13.966616 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043250 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" exitCode=0 Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043290 4820 scope.go:117] "RemoveContainer" containerID="490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043732 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:14 crc kubenswrapper[4820]: E0221 07:39:14.043976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:28 crc kubenswrapper[4820]: I0221 07:39:28.696513 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:28 crc kubenswrapper[4820]: E0221 07:39:28.697469 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:42 crc kubenswrapper[4820]: I0221 07:39:42.697125 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:42 crc kubenswrapper[4820]: E0221 07:39:42.698170 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:53 crc kubenswrapper[4820]: I0221 07:39:53.697655 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:53 crc kubenswrapper[4820]: E0221 07:39:53.698848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:05 crc kubenswrapper[4820]: I0221 07:40:05.701962 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:05 crc kubenswrapper[4820]: E0221 07:40:05.702890 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.534523 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535623 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535646 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-utilities" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535683 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-utilities" Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535756 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-content" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535773 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-content" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.536072 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.537751 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.557194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.723661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.724940 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.746714 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787933 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.788019 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.809125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.872069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: 
\"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.992427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:16.995119 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " 
pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.025382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.086725 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.345912 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.553298 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:17 crc kubenswrapper[4820]: W0221 07:40:17.575807 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7f39e73_ac13_401b_8b13_6b43964609cf.slice/crio-b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e WatchSource:0}: Error finding container b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e: Status 404 returned error can't find the container with id b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.686989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerStarted","Data":"b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689691 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c" exitCode=0 Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689828 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"635db501838d4e42233fed604e1328a175ce679ed6d00e8bd80c7b6b2b676d72"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.691758 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.718192 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8" exitCode=0 Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.718654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8"} Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.726147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005"} Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.738614 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005" exitCode=0 Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.738701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005"} Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.743319 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448" exitCode=0 Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.743352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.696789 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:20 crc kubenswrapper[4820]: E0221 07:40:20.697542 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.753712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" 
event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.756579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerStarted","Data":"e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.783162 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7kl7k" podStartSLOduration=2.3739966949999998 podStartE2EDuration="4.783141431s" podCreationTimestamp="2026-02-21 07:40:16 +0000 UTC" firstStartedPulling="2026-02-21 07:40:17.691164628 +0000 UTC m=+3192.724248876" lastFinishedPulling="2026-02-21 07:40:20.100309414 +0000 UTC m=+3195.133393612" observedRunningTime="2026-02-21 07:40:20.77936892 +0000 UTC m=+3195.812453128" watchObservedRunningTime="2026-02-21 07:40:20.783141431 +0000 UTC m=+3195.816225639" Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.809876 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l84vx" podStartSLOduration=3.250280036 podStartE2EDuration="4.809854517s" podCreationTimestamp="2026-02-21 07:40:16 +0000 UTC" firstStartedPulling="2026-02-21 07:40:18.723193366 +0000 UTC m=+3193.756277594" lastFinishedPulling="2026-02-21 07:40:20.282767847 +0000 UTC m=+3195.315852075" observedRunningTime="2026-02-21 07:40:20.803770442 +0000 UTC m=+3195.836854650" watchObservedRunningTime="2026-02-21 07:40:20.809854517 +0000 UTC m=+3195.842938725" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 07:40:26.873077 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 
07:40:26.873618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 07:40:26.938541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.087576 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.087643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.159772 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.864669 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.874850 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:29 crc kubenswrapper[4820]: I0221 07:40:29.327634 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:29 crc kubenswrapper[4820]: I0221 07:40:29.833996 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l84vx" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" containerID="cri-o://e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" gracePeriod=2 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.324992 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 
07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.325512 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7kl7k" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" containerID="cri-o://ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" gracePeriod=2 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.845786 4820 generic.go:334] "Generic (PLEG): container finished" podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" exitCode=0 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.845897 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce"} Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.849664 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" exitCode=0 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.849752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.215287 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319073 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.320047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities" (OuterVolumeSpecName: "utilities") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.322631 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.324746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v" (OuterVolumeSpecName: "kube-api-access-jd59v") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "kube-api-access-jd59v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.387705 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.419997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.420312 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.420509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod 
\"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421072 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421178 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421291 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities" (OuterVolumeSpecName: "utilities") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.423362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7" (OuterVolumeSpecName: "kube-api-access-mcrs7") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "kube-api-access-mcrs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.449572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522489 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522545 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522566 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"635db501838d4e42233fed604e1328a175ce679ed6d00e8bd80c7b6b2b676d72"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861086 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861122 4820 scope.go:117] "RemoveContainer" containerID="ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.866630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.866851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.895807 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.923307 4820 scope.go:117] "RemoveContainer" containerID="6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.928431 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.937381 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.942271 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.945355 4820 scope.go:117] "RemoveContainer" containerID="c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.965464 4820 scope.go:117] "RemoveContainer" 
containerID="e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.983723 4820 scope.go:117] "RemoveContainer" containerID="5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448" Feb 21 07:40:32 crc kubenswrapper[4820]: I0221 07:40:32.003134 4820 scope.go:117] "RemoveContainer" containerID="097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8" Feb 21 07:40:33 crc kubenswrapper[4820]: I0221 07:40:33.710901 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" path="/var/lib/kubelet/pods/a7f39e73-ac13-401b-8b13-6b43964609cf/volumes" Feb 21 07:40:33 crc kubenswrapper[4820]: I0221 07:40:33.712631 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" path="/var/lib/kubelet/pods/a87b0b35-2855-41c6-be2a-02ec21e4f76c/volumes" Feb 21 07:40:35 crc kubenswrapper[4820]: I0221 07:40:35.701690 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:35 crc kubenswrapper[4820]: E0221 07:40:35.701948 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:49 crc kubenswrapper[4820]: I0221 07:40:49.696296 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:49 crc kubenswrapper[4820]: E0221 07:40:49.696991 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:00 crc kubenswrapper[4820]: I0221 07:41:00.696855 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:00 crc kubenswrapper[4820]: E0221 07:41:00.698940 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:13 crc kubenswrapper[4820]: I0221 07:41:13.696477 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:13 crc kubenswrapper[4820]: E0221 07:41:13.697513 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.277117 4820 scope.go:117] "RemoveContainer" containerID="2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" Feb 21 07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.314345 4820 scope.go:117] "RemoveContainer" containerID="def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0" Feb 21 
07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.353466 4820 scope.go:117] "RemoveContainer" containerID="9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4" Feb 21 07:41:24 crc kubenswrapper[4820]: I0221 07:41:24.697209 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:24 crc kubenswrapper[4820]: E0221 07:41:24.698086 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:36 crc kubenswrapper[4820]: I0221 07:41:36.696997 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:36 crc kubenswrapper[4820]: E0221 07:41:36.697756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:49 crc kubenswrapper[4820]: I0221 07:41:49.697415 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:49 crc kubenswrapper[4820]: E0221 07:41:49.698466 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:01 crc kubenswrapper[4820]: I0221 07:42:01.696766 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:01 crc kubenswrapper[4820]: E0221 07:42:01.697925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:14 crc kubenswrapper[4820]: I0221 07:42:14.697173 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:14 crc kubenswrapper[4820]: E0221 07:42:14.698189 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:29 crc kubenswrapper[4820]: I0221 07:42:29.697218 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:29 crc kubenswrapper[4820]: E0221 07:42:29.699365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:42 crc kubenswrapper[4820]: I0221 07:42:42.697410 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:42 crc kubenswrapper[4820]: E0221 07:42:42.698428 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:54 crc kubenswrapper[4820]: I0221 07:42:54.697375 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:54 crc kubenswrapper[4820]: E0221 07:42:54.698538 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:07 crc kubenswrapper[4820]: I0221 07:43:07.697492 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:07 crc kubenswrapper[4820]: E0221 07:43:07.698292 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:22 crc kubenswrapper[4820]: I0221 07:43:22.719025 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:22 crc kubenswrapper[4820]: E0221 07:43:22.720356 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:33 crc kubenswrapper[4820]: I0221 07:43:33.696489 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:33 crc kubenswrapper[4820]: E0221 07:43:33.697709 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:45 crc kubenswrapper[4820]: I0221 07:43:45.705470 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:45 crc kubenswrapper[4820]: E0221 07:43:45.706623 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.259882 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260367 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260395 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260428 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260495 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260509 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260521 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260547 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260706 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260722 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.261000 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.261044 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.263385 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.281073 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.433967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.434011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.434053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.536538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.536556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.577160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.595468 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.095598 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457532 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f" exitCode=0 Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f"} Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerStarted","Data":"ed32db600b8711ac0a13c62d82b2a4ab2eac4d7bb3074bcdeca00cedd2562296"} Feb 21 07:43:50 crc kubenswrapper[4820]: I0221 07:43:50.473603 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9" exitCode=0 Feb 21 07:43:50 crc kubenswrapper[4820]: I0221 07:43:50.473734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9"} Feb 21 07:43:51 crc kubenswrapper[4820]: I0221 07:43:51.484481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" 
event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerStarted","Data":"6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf"} Feb 21 07:43:51 crc kubenswrapper[4820]: I0221 07:43:51.514097 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ggfn" podStartSLOduration=2.100494225 podStartE2EDuration="4.514073671s" podCreationTimestamp="2026-02-21 07:43:47 +0000 UTC" firstStartedPulling="2026-02-21 07:43:48.45904251 +0000 UTC m=+3403.492126708" lastFinishedPulling="2026-02-21 07:43:50.872621946 +0000 UTC m=+3405.905706154" observedRunningTime="2026-02-21 07:43:51.509338893 +0000 UTC m=+3406.542423111" watchObservedRunningTime="2026-02-21 07:43:51.514073671 +0000 UTC m=+3406.547157899" Feb 21 07:43:56 crc kubenswrapper[4820]: I0221 07:43:56.696211 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:56 crc kubenswrapper[4820]: E0221 07:43:56.696756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.595621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.595980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.637119 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:58 crc kubenswrapper[4820]: I0221 07:43:58.577371 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:58 crc kubenswrapper[4820]: I0221 07:43:58.639699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:00 crc kubenswrapper[4820]: I0221 07:44:00.552407 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ggfn" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" containerID="cri-o://6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" gracePeriod=2 Feb 21 07:44:01 crc kubenswrapper[4820]: I0221 07:44:01.571192 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" exitCode=0 Feb 21 07:44:01 crc kubenswrapper[4820]: I0221 07:44:01.571308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf"} Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.172150 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307898 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.308567 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities" (OuterVolumeSpecName: "utilities") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.314522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28" (OuterVolumeSpecName: "kube-api-access-fhd28") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "kube-api-access-fhd28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.409575 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.409623 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.456453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.510765 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"ed32db600b8711ac0a13c62d82b2a4ab2eac4d7bb3074bcdeca00cedd2562296"} Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586604 4820 scope.go:117] "RemoveContainer" containerID="6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586629 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.619315 4820 scope.go:117] "RemoveContainer" containerID="82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.638567 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.645769 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.667883 4820 scope.go:117] "RemoveContainer" containerID="77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f" Feb 21 07:44:03 crc kubenswrapper[4820]: I0221 07:44:03.706563 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" path="/var/lib/kubelet/pods/59ff409a-d483-413d-8549-862ec2f9da1a/volumes" Feb 21 07:44:11 crc kubenswrapper[4820]: I0221 07:44:11.696866 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:44:11 crc kubenswrapper[4820]: E0221 07:44:11.697607 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:44:22 crc kubenswrapper[4820]: I0221 07:44:22.697191 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:44:23 crc kubenswrapper[4820]: I0221 07:44:23.765089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.168583 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169550 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169590 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-utilities" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169600 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-utilities" Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-content" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169624 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-content" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169844 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.170572 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.173638 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.173952 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.182168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299722 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.403966 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.412646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.420015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.498328 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.764996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: W0221 07:45:00.773165 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc522f8d_0981_40c6_a17f_c5517c78a9cd.slice/crio-9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993 WatchSource:0}: Error finding container 9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993: Status 404 returned error can't find the container with id 9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993 Feb 21 07:45:01 crc kubenswrapper[4820]: I0221 07:45:01.093866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerStarted","Data":"5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc"} Feb 21 07:45:01 crc 
kubenswrapper[4820]: I0221 07:45:01.094072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerStarted","Data":"9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993"} Feb 21 07:45:01 crc kubenswrapper[4820]: I0221 07:45:01.109940 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" podStartSLOduration=1.109920052 podStartE2EDuration="1.109920052s" podCreationTimestamp="2026-02-21 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:45:01.109023879 +0000 UTC m=+3476.142108087" watchObservedRunningTime="2026-02-21 07:45:01.109920052 +0000 UTC m=+3476.143004250" Feb 21 07:45:02 crc kubenswrapper[4820]: I0221 07:45:02.104089 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerID="5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc" exitCode=0 Feb 21 07:45:02 crc kubenswrapper[4820]: I0221 07:45:02.104151 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerDied","Data":"5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc"} Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.488916 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549555 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.550308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.554334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.556476 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8" (OuterVolumeSpecName: "kube-api-access-jv7b8") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "kube-api-access-jv7b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651458 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651486 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651496 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123403 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerDied","Data":"9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993"} Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123717 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123518 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.603454 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.613612 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:45:05 crc kubenswrapper[4820]: I0221 07:45:05.707954 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54597218-e332-4423-adc0-b4be2977a4ce" path="/var/lib/kubelet/pods/54597218-e332-4423-adc0-b4be2977a4ce/volumes" Feb 21 07:45:17 crc kubenswrapper[4820]: I0221 07:45:17.508942 4820 scope.go:117] "RemoveContainer" containerID="5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.038462 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:50 crc kubenswrapper[4820]: E0221 07:45:50.039738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.039762 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.040017 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.041878 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.043124 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.105757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.105886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.107407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209176 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209311 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.236900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.392966 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.888856 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.539761 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5791a2a-f861-4564-b560-cef4e1d2b529" containerID="ba2fd74f17ff184d3f71a915ff4a6b54ee3d5b98b067962e92939d708ce2f3cd" exitCode=0 Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.540152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerDied","Data":"ba2fd74f17ff184d3f71a915ff4a6b54ee3d5b98b067962e92939d708ce2f3cd"} Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.540191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerStarted","Data":"8e0f52b9f194ee392be24b10a7a304deaf167556f84f346a03c94b2df7969ae4"} Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.542815 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:45:56 crc kubenswrapper[4820]: I0221 07:45:56.583818 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5791a2a-f861-4564-b560-cef4e1d2b529" containerID="2d90eb386922e2185aefb6c54db664e6e352a006fb1ba2947fe80ebe5f7e2919" exitCode=0 Feb 21 07:45:56 crc kubenswrapper[4820]: I0221 07:45:56.583922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerDied","Data":"2d90eb386922e2185aefb6c54db664e6e352a006fb1ba2947fe80ebe5f7e2919"} Feb 21 07:45:57 crc kubenswrapper[4820]: I0221 07:45:57.596926 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerStarted","Data":"c6d993edb50049b43123e771d9d6a7c32cee5f3eaa89ea0576b281ca5b59a11a"} Feb 21 07:45:57 crc kubenswrapper[4820]: I0221 07:45:57.623819 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8lnd" podStartSLOduration=2.2239967099999998 podStartE2EDuration="7.623792726s" podCreationTimestamp="2026-02-21 07:45:50 +0000 UTC" firstStartedPulling="2026-02-21 07:45:51.542460469 +0000 UTC m=+3526.575544707" lastFinishedPulling="2026-02-21 07:45:56.942256525 +0000 UTC m=+3531.975340723" observedRunningTime="2026-02-21 07:45:57.618899474 +0000 UTC m=+3532.651983682" watchObservedRunningTime="2026-02-21 07:45:57.623792726 +0000 UTC m=+3532.656876934" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.393451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.394163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.458716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.472912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.561406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.604402 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:10 crc 
kubenswrapper[4820]: I0221 07:46:10.604702 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rj56" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server" containerID="cri-o://562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" gracePeriod=2 Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.992266 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.043415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities" (OuterVolumeSpecName: "utilities") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.061471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7" (OuterVolumeSpecName: "kube-api-access-fz5g7") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "kube-api-access-fz5g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.097071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144360 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144396 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144405 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.703532 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" 
containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" exitCode=0 Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.703706 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"} Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7"} Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710762 4820 scope.go:117] "RemoveContainer" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.748522 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.752857 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.754015 4820 scope.go:117] "RemoveContainer" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.771216 4820 scope.go:117] "RemoveContainer" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.790279 4820 scope.go:117] "RemoveContainer" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 
07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.790954 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": container with ID starting with 562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e not found: ID does not exist" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791027 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"} err="failed to get container status \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": rpc error: code = NotFound desc = could not find container \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": container with ID starting with 562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e not found: ID does not exist" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791072 4820 scope.go:117] "RemoveContainer" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.791455 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": container with ID starting with d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766 not found: ID does not exist" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791505 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"} err="failed to get container status 
\"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": rpc error: code = NotFound desc = could not find container \"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": container with ID starting with d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766 not found: ID does not exist" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791527 4820 scope.go:117] "RemoveContainer" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.791941 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": container with ID starting with 0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6 not found: ID does not exist" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791979 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6"} err="failed to get container status \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": rpc error: code = NotFound desc = could not find container \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": container with ID starting with 0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6 not found: ID does not exist" Feb 21 07:46:13 crc kubenswrapper[4820]: I0221 07:46:13.710872 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72aad09-5c42-41f0-9699-9160d1750191" path="/var/lib/kubelet/pods/a72aad09-5c42-41f0-9699-9160d1750191/volumes" Feb 21 07:46:43 crc kubenswrapper[4820]: I0221 07:46:43.816724 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:46:43 crc kubenswrapper[4820]: I0221 07:46:43.817221 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:13 crc kubenswrapper[4820]: I0221 07:47:13.816148 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:47:13 crc kubenswrapper[4820]: I0221 07:47:13.816757 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.817010 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.819652 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.819749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.820686 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.820800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5" gracePeriod=600 Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455145 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5" exitCode=0 Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"} Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455816 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:50:13 crc kubenswrapper[4820]: I0221 07:50:13.816561 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:50:13 crc kubenswrapper[4820]: I0221 07:50:13.817162 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:50:43 crc kubenswrapper[4820]: I0221 07:50:43.817036 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:50:43 crc kubenswrapper[4820]: I0221 07:50:43.817742 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.085073 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 
21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087399 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-content" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087437 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-content" Feb 21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087502 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-utilities" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087520 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-utilities" Feb 21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087560 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087576 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.090618 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.093675 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.108634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.108745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.109024 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.214886 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.214980 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.215027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.215554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.216194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.256618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.429863 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.953340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691006 4820 generic.go:334] "Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e" exitCode=0 Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e"} Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerStarted","Data":"5aeca919d8786948187d2de5319582612e4556d145b9674f4edcf179da89ecc6"} Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.694717 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:51:08 crc kubenswrapper[4820]: I0221 07:51:08.700934 4820 generic.go:334] "Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8" exitCode=0 Feb 21 07:51:08 crc kubenswrapper[4820]: I0221 07:51:08.701148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8"} Feb 21 07:51:09 crc kubenswrapper[4820]: I0221 07:51:09.711190 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerStarted","Data":"03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612"} Feb 21 07:51:09 crc kubenswrapper[4820]: I0221 07:51:09.735706 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nvld4" podStartSLOduration=2.285887618 podStartE2EDuration="3.735685086s" podCreationTimestamp="2026-02-21 07:51:06 +0000 UTC" firstStartedPulling="2026-02-21 07:51:07.694422904 +0000 UTC m=+3842.727507112" lastFinishedPulling="2026-02-21 07:51:09.144220372 +0000 UTC m=+3844.177304580" observedRunningTime="2026-02-21 07:51:09.729513169 +0000 UTC m=+3844.762597367" watchObservedRunningTime="2026-02-21 07:51:09.735685086 +0000 UTC m=+3844.768769284" Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.816347 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.816963 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817020 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817829 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817924 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" gracePeriod=600 Feb 21 07:51:14 crc kubenswrapper[4820]: E0221 07:51:14.455919 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746676 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" exitCode=0 Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"} Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746745 4820 scope.go:117] "RemoveContainer" containerID="ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5" Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.747170 4820 
scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:51:14 crc kubenswrapper[4820]: E0221 07:51:14.747402 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.429949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.430814 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.479691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.821647 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.872417 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 21 07:51:18 crc kubenswrapper[4820]: I0221 07:51:18.776742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nvld4" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server" containerID="cri-o://03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612" gracePeriod=2 Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.798090 4820 generic.go:334] 
"Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612" exitCode=0 Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.798142 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612"} Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.877612 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.040905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.041100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.041141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.042917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities" (OuterVolumeSpecName: 
"utilities") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.047220 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94" (OuterVolumeSpecName: "kube-api-access-6ck94") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "kube-api-access-6ck94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.109978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142899 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142939 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142957 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") on node \"crc\" DevicePath \"\"" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"5aeca919d8786948187d2de5319582612e4556d145b9674f4edcf179da89ecc6"} Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815777 4820 scope.go:117] "RemoveContainer" containerID="03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815878 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.833946 4820 scope.go:117] "RemoveContainer" containerID="3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.871131 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.872582 4820 scope.go:117] "RemoveContainer" containerID="ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e" Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.881511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"] Feb 21 07:51:21 crc kubenswrapper[4820]: I0221 07:51:21.713129 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" path="/var/lib/kubelet/pods/37061a92-ef34-4c34-a0c0-acb8ca735d72/volumes" Feb 21 07:51:25 crc kubenswrapper[4820]: I0221 07:51:25.718175 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:51:25 crc kubenswrapper[4820]: E0221 07:51:25.721089 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:51:37 crc kubenswrapper[4820]: I0221 07:51:37.698586 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:51:37 crc kubenswrapper[4820]: E0221 07:51:37.699526 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:51:50 crc kubenswrapper[4820]: I0221 07:51:50.697619 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:51:50 crc kubenswrapper[4820]: E0221 07:51:50.698485 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:52:02 crc kubenswrapper[4820]: I0221 07:52:02.697591 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:52:02 crc kubenswrapper[4820]: E0221 07:52:02.699692 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:52:16 crc kubenswrapper[4820]: I0221 07:52:16.696815 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:52:16 crc kubenswrapper[4820]: E0221 07:52:16.697889 4820 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:52:28 crc kubenswrapper[4820]: I0221 07:52:28.696428 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:52:28 crc kubenswrapper[4820]: E0221 07:52:28.697122 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:52:39 crc kubenswrapper[4820]: I0221 07:52:39.697127 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:52:39 crc kubenswrapper[4820]: E0221 07:52:39.697958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:52:51 crc kubenswrapper[4820]: I0221 07:52:51.697459 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:52:51 crc kubenswrapper[4820]: E0221 07:52:51.698538 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:53:04 crc kubenswrapper[4820]: I0221 07:53:04.697130 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:53:04 crc kubenswrapper[4820]: E0221 07:53:04.698220 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:53:15 crc kubenswrapper[4820]: I0221 07:53:15.709001 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:53:15 crc kubenswrapper[4820]: E0221 07:53:15.710895 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:53:29 crc kubenswrapper[4820]: I0221 07:53:29.698215 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:53:29 crc kubenswrapper[4820]: E0221 
07:53:29.698932 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:53:41 crc kubenswrapper[4820]: I0221 07:53:41.697225 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:53:41 crc kubenswrapper[4820]: E0221 07:53:41.702452 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:53:54 crc kubenswrapper[4820]: I0221 07:53:54.697117 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:53:54 crc kubenswrapper[4820]: E0221 07:53:54.697988 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:08 crc kubenswrapper[4820]: I0221 07:54:08.696708 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:08 crc 
kubenswrapper[4820]: E0221 07:54:08.697365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.863558 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864119 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864134 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server" Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864145 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-content" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-content" Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864161 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-utilities" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-utilities" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864313 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.865205 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.909173 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: 
\"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145605 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.146006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.164163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " 
pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.210840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.630714 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271559 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d" exitCode=0 Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d"} Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"e47be5b63a22e1b81079b06c8168105121b628bdf4c0ea2144497c900368850f"} Feb 21 07:54:18 crc kubenswrapper[4820]: I0221 07:54:18.286038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8"} Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.293270 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8" exitCode=0 Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.293314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8"} Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.697312 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:19 crc kubenswrapper[4820]: E0221 07:54:19.697634 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:20 crc kubenswrapper[4820]: I0221 07:54:20.304220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f"} Feb 21 07:54:26 crc kubenswrapper[4820]: I0221 07:54:26.211439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:26 crc kubenswrapper[4820]: I0221 07:54:26.212019 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:27 crc kubenswrapper[4820]: I0221 07:54:27.259161 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qh6v" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" probeResult="failure" output=< Feb 21 07:54:27 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 07:54:27 crc kubenswrapper[4820]: > Feb 21 
07:54:32 crc kubenswrapper[4820]: I0221 07:54:32.697522 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:32 crc kubenswrapper[4820]: E0221 07:54:32.698565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.283866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.305531 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qh6v" podStartSLOduration=18.737537006 podStartE2EDuration="21.305516812s" podCreationTimestamp="2026-02-21 07:54:15 +0000 UTC" firstStartedPulling="2026-02-21 07:54:17.273389312 +0000 UTC m=+4032.306473500" lastFinishedPulling="2026-02-21 07:54:19.841369088 +0000 UTC m=+4034.874453306" observedRunningTime="2026-02-21 07:54:20.332533269 +0000 UTC m=+4035.365617527" watchObservedRunningTime="2026-02-21 07:54:36.305516812 +0000 UTC m=+4051.338601010" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.335133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.053080 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.062796 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.088203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.189854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.189931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.190042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291563 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.292429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.292445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.312254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.397621 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.834830 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457616 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" exitCode=0 Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7"} Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerStarted","Data":"cab574a64fae6e111005a38e735d31a4069face554eedb6727fd214b99556119"} Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.467903 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" exitCode=0 Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.468098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66"} Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.841745 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.842014 4820 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qh6v" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" containerID="cri-o://0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" gracePeriod=2 Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.483059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerStarted","Data":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.487652 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" exitCode=0 Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.487700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f"} Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.653935 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.688946 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvqzv" podStartSLOduration=2.290806141 podStartE2EDuration="3.688915258s" podCreationTimestamp="2026-02-21 07:54:41 +0000 UTC" firstStartedPulling="2026-02-21 07:54:42.459468149 +0000 UTC m=+4057.492552357" lastFinishedPulling="2026-02-21 07:54:43.857577236 +0000 UTC m=+4058.890661474" observedRunningTime="2026-02-21 07:54:44.518612257 +0000 UTC m=+4059.551696465" watchObservedRunningTime="2026-02-21 07:54:44.688915258 +0000 UTC m=+4059.721999496" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.742975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.743069 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.743184 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.744122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities" 
(OuterVolumeSpecName: "utilities") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.749203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn" (OuterVolumeSpecName: "kube-api-access-9srpn") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "kube-api-access-9srpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.844468 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.844502 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.902585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.945439 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.501589 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.501591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"e47be5b63a22e1b81079b06c8168105121b628bdf4c0ea2144497c900368850f"} Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.502129 4820 scope.go:117] "RemoveContainer" containerID="0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.530981 4820 scope.go:117] "RemoveContainer" containerID="9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.557889 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.587414 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.596664 4820 scope.go:117] "RemoveContainer" containerID="551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.707784 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" path="/var/lib/kubelet/pods/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0/volumes" Feb 21 07:54:47 crc 
kubenswrapper[4820]: I0221 07:54:47.696872 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:47 crc kubenswrapper[4820]: E0221 07:54:47.697195 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.398352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.398799 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.477204 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.595438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:52 crc kubenswrapper[4820]: I0221 07:54:52.043355 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.568604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvqzv" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" containerID="cri-o://728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" gracePeriod=2 Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 
07:54:53.930264 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.975291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.975466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities" (OuterVolumeSpecName: "utilities") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976938 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.981038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx" (OuterVolumeSpecName: "kube-api-access-7zlzx") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "kube-api-access-7zlzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.998545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.078620 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.078670 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576608 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" exitCode=0 Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576639 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576684 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576706 4820 scope.go:117] "RemoveContainer" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"cab574a64fae6e111005a38e735d31a4069face554eedb6727fd214b99556119"} Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.609781 4820 scope.go:117] "RemoveContainer" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.613523 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.618574 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.644857 4820 scope.go:117] "RemoveContainer" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.674883 4820 scope.go:117] "RemoveContainer" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 07:54:54.675552 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": container with ID starting with 728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a not found: ID does not exist" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.675601 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} err="failed to get container status \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": rpc error: code = NotFound desc = could not find container \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": container with ID starting with 728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a not found: ID does not exist" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.675638 4820 scope.go:117] "RemoveContainer" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 07:54:54.675977 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": container with ID starting with 57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66 not found: ID does not exist" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66"} err="failed to get container status \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": rpc error: code = NotFound desc = could not find container \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": container with ID starting with 57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66 not found: ID does not exist" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676049 4820 scope.go:117] "RemoveContainer" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 
07:54:54.676414 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": container with ID starting with c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7 not found: ID does not exist" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676572 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7"} err="failed to get container status \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": rpc error: code = NotFound desc = could not find container \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": container with ID starting with c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7 not found: ID does not exist" Feb 21 07:54:55 crc kubenswrapper[4820]: I0221 07:54:55.714801 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" path="/var/lib/kubelet/pods/89ea0d90-b48d-4619-930e-e2e56101d066/volumes" Feb 21 07:54:58 crc kubenswrapper[4820]: I0221 07:54:58.696829 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:58 crc kubenswrapper[4820]: E0221 07:54:58.697125 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:11 crc kubenswrapper[4820]: I0221 07:55:11.697958 
4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:11 crc kubenswrapper[4820]: E0221 07:55:11.700065 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:22 crc kubenswrapper[4820]: I0221 07:55:22.697421 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:22 crc kubenswrapper[4820]: E0221 07:55:22.698036 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:36 crc kubenswrapper[4820]: I0221 07:55:36.697311 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:36 crc kubenswrapper[4820]: E0221 07:55:36.699100 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:51 crc kubenswrapper[4820]: I0221 
07:55:51.696866 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:51 crc kubenswrapper[4820]: E0221 07:55:51.698079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:56:06 crc kubenswrapper[4820]: I0221 07:56:06.696751 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:56:06 crc kubenswrapper[4820]: E0221 07:56:06.697572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.865411 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867033 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867132 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867198 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867278 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867404 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867530 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867592 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867681 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867765 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.868075 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.868175 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.869383 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.885902 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992721 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094529 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.095181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.095183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.122723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.206549 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.702592 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224069 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" exitCode=0 Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066"} Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerStarted","Data":"dc86ced4a4e068c48caf8132e6efbb20cfa16413fa778776951f2112849a0a68"} Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.226216 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:56:18 crc kubenswrapper[4820]: I0221 07:56:18.697619 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.240136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.241841 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" exitCode=0 Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.241886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e"} Feb 21 07:56:20 crc kubenswrapper[4820]: I0221 07:56:20.257832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerStarted","Data":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} Feb 21 07:56:20 crc kubenswrapper[4820]: I0221 07:56:20.287991 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9b2lv" podStartSLOduration=2.871190908 podStartE2EDuration="5.287966998s" podCreationTimestamp="2026-02-21 07:56:15 +0000 UTC" firstStartedPulling="2026-02-21 07:56:17.225879302 +0000 UTC m=+4152.258963510" lastFinishedPulling="2026-02-21 07:56:19.642655402 +0000 UTC m=+4154.675739600" observedRunningTime="2026-02-21 07:56:20.280571968 +0000 UTC m=+4155.313656196" watchObservedRunningTime="2026-02-21 07:56:20.287966998 +0000 UTC m=+4155.321051216" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.207553 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 
07:56:26.207868 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.250979 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.354310 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.651262 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.319807 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9b2lv" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" containerID="cri-o://a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" gracePeriod=2 Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.773813 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.974990 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.975087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.975156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.976893 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities" (OuterVolumeSpecName: "utilities") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.980693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw" (OuterVolumeSpecName: "kube-api-access-zdzrw") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "kube-api-access-zdzrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.076330 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.077010 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330532 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" exitCode=0 Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330583 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330613 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"dc86ced4a4e068c48caf8132e6efbb20cfa16413fa778776951f2112849a0a68"} Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330633 4820 scope.go:117] "RemoveContainer" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330780 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.351198 4820 scope.go:117] "RemoveContainer" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.370191 4820 scope.go:117] "RemoveContainer" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.398830 4820 scope.go:117] "RemoveContainer" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.399220 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": container with ID starting with a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb not found: ID does not exist" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399281 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} err="failed to get container status \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": rpc error: code = NotFound desc = could not find container \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": container with ID starting with a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399305 4820 scope.go:117] "RemoveContainer" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.399799 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": container with ID starting with 7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e not found: ID does not exist" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399826 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e"} err="failed to get container status \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": rpc error: code = NotFound desc = could not find container \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": container with ID starting with 7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399844 4820 scope.go:117] "RemoveContainer" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.400299 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": container with ID starting with ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066 not found: ID does not exist" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.400340 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066"} err="failed to get container status \"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": rpc error: code = NotFound desc = could not find container 
\"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": container with ID starting with ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066 not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.623289 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.673536 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.681228 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.685832 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.705685 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" path="/var/lib/kubelet/pods/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1/volumes" Feb 21 07:58:43 crc kubenswrapper[4820]: I0221 07:58:43.815682 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:58:43 crc kubenswrapper[4820]: I0221 07:58:43.816181 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:13 crc kubenswrapper[4820]: I0221 07:59:13.816406 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:59:13 crc kubenswrapper[4820]: I0221 07:59:13.817157 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816212 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816898 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816975 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:59:43 crc 
kubenswrapper[4820]: I0221 07:59:43.818265 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.818502 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b" gracePeriod=600 Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875206 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b" exitCode=0 Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"} Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875758 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.192330 4820 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193187 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-utilities" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193203 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-utilities" Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-content" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-content" Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193272 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193281 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193408 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.194021 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.198041 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.201433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.204294 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.306897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.307036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.307941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.409507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.409937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.410003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.411032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.418049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.446417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.541221 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.961118 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: W0221 08:00:00.964415 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053b4929_8cfe_48ef_b6ab_d57fa3eeebc1.slice/crio-fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526 WatchSource:0}: Error finding container fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526: Status 404 returned error can't find the container with id fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526 Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.993283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerStarted","Data":"fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526"} Feb 21 08:00:02 crc 
kubenswrapper[4820]: I0221 08:00:02.000433 4820 generic.go:334] "Generic (PLEG): container finished" podID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerID="2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f" exitCode=0 Feb 21 08:00:02 crc kubenswrapper[4820]: I0221 08:00:02.000518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerDied","Data":"2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f"} Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.268760 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume" (OuterVolumeSpecName: "config-volume") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.360967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.361060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t" (OuterVolumeSpecName: "kube-api-access-q8c2t") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "kube-api-access-q8c2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457365 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457396 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457407 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerDied","Data":"fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526"} Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014859 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014874 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.341869 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.348048 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 08:00:05 crc kubenswrapper[4820]: I0221 08:00:05.703882 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" path="/var/lib/kubelet/pods/ebbbeb29-093d-424c-aa21-a711f564f201/volumes" Feb 21 08:00:17 crc kubenswrapper[4820]: I0221 08:00:17.881971 4820 scope.go:117] "RemoveContainer" containerID="a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0" Feb 21 08:01:19 crc kubenswrapper[4820]: I0221 08:01:19.921186 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 08:01:19 crc kubenswrapper[4820]: I0221 08:01:19.926119 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048294 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: E0221 08:01:20.048677 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048697 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048819 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 
21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.049290 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.053234 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.054263 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.055120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.058494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.059048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.076915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.076956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.077013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") 
" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.195711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.371748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.788262 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.795064 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.636891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerStarted","Data":"abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106"} Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.636948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerStarted","Data":"e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0"} Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.708432 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" path="/var/lib/kubelet/pods/3c764255-4b53-476b-ad40-4bd38c76f92c/volumes" Feb 21 08:01:22 crc kubenswrapper[4820]: I0221 08:01:22.646488 4820 generic.go:334] "Generic (PLEG): container finished" podID="16aa1c55-d991-403c-afc5-a7b85d23c010" 
containerID="abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106" exitCode=0 Feb 21 08:01:22 crc kubenswrapper[4820]: I0221 08:01:22.646526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerDied","Data":"abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106"} Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.894779 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). 
InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.936131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f" (OuterVolumeSpecName: "kube-api-access-z5d7f") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). InnerVolumeSpecName "kube-api-access-z5d7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.949782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032657 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032697 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032713 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" 
event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerDied","Data":"e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0"} Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661877 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661882 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.055665 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.061090 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.199873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:26 crc kubenswrapper[4820]: E0221 08:01:26.200210 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.200227 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.200492 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.201064 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.202585 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.204711 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.204869 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.205607 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.208408 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: 
\"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.365184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.365585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.382904 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.522001 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.791296 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.693996 4820 generic.go:334] "Generic (PLEG): container finished" podID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerID="12b4f7c9ed195d13c22453ee4884c11b5fecd881b33616d52b44ca9b032f4b29" exitCode=0 Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.694037 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerDied","Data":"12b4f7c9ed195d13c22453ee4884c11b5fecd881b33616d52b44ca9b032f4b29"} Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.694646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerStarted","Data":"ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8"} Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.706570 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" path="/var/lib/kubelet/pods/16aa1c55-d991-403c-afc5-a7b85d23c010/volumes" Feb 21 08:01:28 crc kubenswrapper[4820]: I0221 08:01:28.966347 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.099911 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100334 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100422 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100763 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.105447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z" (OuterVolumeSpecName: "kube-api-access-5wn4z") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "kube-api-access-5wn4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.117164 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.201874 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.201908 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerDied","Data":"ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8"} Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706809 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706813 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8" Feb 21 08:02:13 crc kubenswrapper[4820]: I0221 08:02:13.816203 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:02:13 crc kubenswrapper[4820]: I0221 08:02:13.816908 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:02:17 crc kubenswrapper[4820]: I0221 08:02:17.946635 4820 scope.go:117] "RemoveContainer" containerID="edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374" Feb 21 08:02:43 crc kubenswrapper[4820]: I0221 08:02:43.816974 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:02:43 crc kubenswrapper[4820]: I0221 08:02:43.819306 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:03:13 crc kubenswrapper[4820]: 
I0221 08:03:13.815835 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.816544 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.816588 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.817201 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.817286 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" gracePeriod=600 Feb 21 08:03:13 crc kubenswrapper[4820]: E0221 08:03:13.953832 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455020 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" exitCode=0 Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"} Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455106 4820 scope.go:117] "RemoveContainer" containerID="346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b" Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455608 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:03:14 crc kubenswrapper[4820]: E0221 08:03:14.455957 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:03:27 crc kubenswrapper[4820]: I0221 08:03:27.696761 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:03:27 crc kubenswrapper[4820]: E0221 08:03:27.697630 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:03:38 crc kubenswrapper[4820]: I0221 08:03:38.696384 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:03:38 crc kubenswrapper[4820]: E0221 08:03:38.696987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:03:51 crc kubenswrapper[4820]: I0221 08:03:51.698583 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:03:51 crc kubenswrapper[4820]: E0221 08:03:51.699549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398095 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"] Feb 21 08:03:53 crc kubenswrapper[4820]: E0221 08:03:53.398427 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398441 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398558 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.399227 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.404759 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405183 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405542 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405705 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vh5w4" Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.412368 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"] Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.416747 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.425999 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.443094 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.453676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469275 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570323 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570519 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.571469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.572648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.573281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.601039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.604297 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.631348 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.632043 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.656544 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.658487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.675083 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.741592 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.774810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.775767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.809492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.937337 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.937849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.973077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.974560 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.005670 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.078083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.078181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.079061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.181737 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.181757 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.201257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.203724 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.266413 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.305111 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.331096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.760025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.779045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" event={"ID":"6f3506e9-072b-4eef-afbb-95daa9d0a56d","Type":"ContainerStarted","Data":"d0f2ab4d624cd6284f03423f831cb678e5ef49c1e3adcba6f173f8f5e8d13c31"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.781454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerStarted","Data":"017b50f9e39ade686cd66378baac106456851ad2545fb098c57598953b748fb6"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.782596 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerStarted","Data":"f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.796383 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.797633 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.800694 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801464 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801536 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7r6cd"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801606 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801688 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.803974 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.812013 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.991727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997101 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.086755 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.090112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.091810 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.092029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.092628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.096734 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097258 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z7jtg"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097417 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097766 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098783 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098852 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098872 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098925 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.103735 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.106600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.106951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.107662 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.107802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.108531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.139182 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.139230 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/934de71409e5f275cb94cfa922d2597bbcc02a71598b29b6833fab6760155167/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.191660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.192051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.203149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204186 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204280 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204674 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.210767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.210883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.211573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.248295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306875 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.310124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.311813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.312649 4820 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.312753 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb4bbdf2b86e995ba706b4b62c0c402d7bc60ad53da33c49f02f1a8b30c7c64a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.314382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.315824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.330148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.335963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.355814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.438347 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.446131 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.816702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerStarted","Data":"c0fbba8abdf7dcc3b8fefeafbe554110877b155984d0717a8a1b1d9fb8c1f3ce"} Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.962385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:03:55 crc kubenswrapper[4820]: W0221 08:03:55.979220 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1252400_6674_4a2e_a4ad_dc8f7fc45dee.slice/crio-e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128 WatchSource:0}: Error finding container e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128: Status 404 returned error can't find the container with id e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128 Feb 21 08:03:56 crc 
kubenswrapper[4820]: I0221 08:03:56.079368 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: W0221 08:03:56.087515 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51a301_b647_44f6_bd29_7db35420fa2c.slice/crio-cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3 WatchSource:0}: Error finding container cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3: Status 404 returned error can't find the container with id cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3 Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.102051 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.107593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110358 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110475 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110503 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-p6xzl" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.111945 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.115675 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.130366 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226153 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " 
pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226404 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.327911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc 
kubenswrapper[4820]: I0221 08:03:56.328328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328528 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.329938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.331217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.331367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.332008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.334635 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.334669 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eec13e17780b306286e9ac1088ca2a300c26c16fb52d56a613cbee8d6a6cb356/globalmount\"" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.335529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.335826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.346687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.381809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod 
\"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.445611 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.825060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3"} Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.825999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128"} Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.555628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: W0221 08:03:57.558826 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21d2b3a6_8a28_4287_8953_23782681799a.slice/crio-a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf WatchSource:0}: Error finding container a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf: Status 404 returned error can't find the container with id a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.718214 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.720164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.723523 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727579 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8pfnn" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727731 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727803 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727864 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.837642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf"} Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865773 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.967954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.971585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.979002 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.979460 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8833ec5ee5aa131adee272449360987867638031830eb2bbc628affc6d67dded/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.982022 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.984727 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.000857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.035260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.080740 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.081844 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.085467 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.087349 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.087430 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v4mhk" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.114568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175634 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175674 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.278874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.282226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.283432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 
08:03:58.283729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.292655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.342794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.402743 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.842607 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: W0221 08:03:58.858265 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a14fdd_7df9_4cac_aa21_b4562f320fcc.slice/crio-f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db WatchSource:0}: Error finding container f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db: Status 404 returned error can't find the container with id f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.911938 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: W0221 08:03:58.917495 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c039fd9_87df_497c_8e40_f9b5d2759d0f.slice/crio-19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89 WatchSource:0}: Error finding container 19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89: Status 404 returned error can't find the container with id 19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89 Feb 21 08:03:59 crc kubenswrapper[4820]: I0221 08:03:59.938679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db"} Feb 21 08:03:59 crc kubenswrapper[4820]: I0221 08:03:59.948606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4c039fd9-87df-497c-8e40-f9b5d2759d0f","Type":"ContainerStarted","Data":"19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89"} Feb 21 08:04:03 crc kubenswrapper[4820]: I0221 08:04:03.697499 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:03 crc kubenswrapper[4820]: E0221 08:04:03.698008 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:14 crc kubenswrapper[4820]: I0221 08:04:14.696767 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:14 crc kubenswrapper[4820]: E0221 08:04:14.697586 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.238275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.240535 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.252078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") 
pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.493172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.493190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"redhat-operators-vm7mv\" (UID: 
\"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.690803 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.863036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.305868 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.306750 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.306916 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhfzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(3d51a301-b647-44f6-bd29-7db35420fa2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.308400 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.368941 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.368988 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.369118 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8c69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e1252400-6674-4a2e-a4ad-dc8f7fc45dee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.370495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.406953 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.407021 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.407160 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h564h676h699hcdh67bh66hfdh569h545h648h94h546h696h668h89h96h667h575h595h5d9h584h8dhbdh697h54bhb7h58fh5c9hd8h5cdh5c7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkxxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bb88b7bf5-mdrlh_openstack(95cd39a3-df2b-4f19-bf18-d5fcf790995e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.408426 4820 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.431647 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.432008 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.432212 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fdpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-774d9db845-q2fn4_openstack(6f3506e9-072b-4eef-afbb-95daa9d0a56d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.433874 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" podUID="6f3506e9-072b-4eef-afbb-95daa9d0a56d" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.484904 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.485225 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.828102 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:24 crc kubenswrapper[4820]: W0221 08:04:24.878830 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfc6863_b2a0_4a8b_8445_d5bdc742e722.slice/crio-9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e WatchSource:0}: Error finding container 9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e: Status 404 returned error can't find the container with id 9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.879759 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.965816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.965884 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.966627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config" (OuterVolumeSpecName: "config") pod "6f3506e9-072b-4eef-afbb-95daa9d0a56d" (UID: "6f3506e9-072b-4eef-afbb-95daa9d0a56d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.973812 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv" (OuterVolumeSpecName: "kube-api-access-5fdpv") pod "6f3506e9-072b-4eef-afbb-95daa9d0a56d" (UID: "6f3506e9-072b-4eef-afbb-95daa9d0a56d"). InnerVolumeSpecName "kube-api-access-5fdpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.068025 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.068065 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.491464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4c039fd9-87df-497c-8e40-f9b5d2759d0f","Type":"ContainerStarted","Data":"9525b66628ad454cf7d4346179fcfdaa2a8305dcd5202e40243ffa35d0570c8d"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.491544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.493512 4820 generic.go:334] "Generic (PLEG): container finished" podID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.493587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497729 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.499002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" event={"ID":"6f3506e9-072b-4eef-afbb-95daa9d0a56d","Type":"ContainerDied","Data":"d0f2ab4d624cd6284f03423f831cb678e5ef49c1e3adcba6f173f8f5e8d13c31"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.499129 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.505332 4820 generic.go:334] "Generic (PLEG): container finished" podID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.505384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.508779 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.511102 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.512716 4820 generic.go:334] "Generic (PLEG): container finished" podID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerID="4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.512747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerDied","Data":"4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.522643 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.079752077 podStartE2EDuration="27.522617363s" podCreationTimestamp="2026-02-21 08:03:58 +0000 UTC" firstStartedPulling="2026-02-21 08:03:58.919695962 +0000 UTC m=+4613.952780160" lastFinishedPulling="2026-02-21 08:04:24.362561248 +0000 UTC m=+4639.395645446" observedRunningTime="2026-02-21 08:04:25.514564555 +0000 UTC m=+4640.547648753" watchObservedRunningTime="2026-02-21 08:04:25.522617363 +0000 UTC m=+4640.555701561" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.698697 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.699339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.735090 4820 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 21 08:04:25 crc kubenswrapper[4820]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 08:04:25 crc kubenswrapper[4820]: > podSandboxID="f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.735364 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 08:04:25 crc kubenswrapper[4820]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h54dhb7h666h69h76h59ch55ch65ch596h8h79h5c8h57hc8hfch5d7h697h79h698h5fch644hf9h54chbfh655hfchcbh5f8h646h5f7h89q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8j4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f67b98cb7-pjgtn_openstack(5941b7b4-35ad-4016-b1bc-46b485dc8105): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 08:04:25 crc kubenswrapper[4820]: > logger="UnhandledError" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.736536 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.740334 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"] Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.768591 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"] Feb 21 
08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.866958 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992449 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.996516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k" (OuterVolumeSpecName: "kube-api-access-f465k") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "kube-api-access-f465k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.009147 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.011537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config" (OuterVolumeSpecName: "config") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094356 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094391 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094401 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.520607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerStarted","Data":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} 
Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.522187 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerDied","Data":"017b50f9e39ade686cd66378baac106456851ad2545fb098c57598953b748fb6"} Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523140 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523171 4820 scope.go:117] "RemoveContainer" containerID="4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.528868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.562041 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podStartSLOduration=-9223372003.29276 podStartE2EDuration="33.562015907s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:54.77934964 +0000 UTC m=+4609.812433838" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:04:26.545822499 +0000 UTC m=+4641.578906697" watchObservedRunningTime="2026-02-21 08:04:26.562015907 +0000 UTC m=+4641.595100105" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.642092 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"] Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 
08:04:26.647794 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"] Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.537017 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerStarted","Data":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"} Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.537811 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.539305 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0" exitCode=0 Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.539388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.558853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podStartSLOduration=4.54615551 podStartE2EDuration="34.55883231s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:54.355374486 +0000 UTC m=+4609.388458694" lastFinishedPulling="2026-02-21 08:04:24.368051306 +0000 UTC m=+4639.401135494" observedRunningTime="2026-02-21 08:04:27.556721333 +0000 UTC m=+4642.589805541" watchObservedRunningTime="2026-02-21 08:04:27.55883231 +0000 UTC m=+4642.591916508" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.706115 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" 
path="/var/lib/kubelet/pods/6637ce38-7cdd-4970-b22e-0762f51447f8/volumes" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.706862 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3506e9-072b-4eef-afbb-95daa9d0a56d" path="/var/lib/kubelet/pods/6f3506e9-072b-4eef-afbb-95daa9d0a56d/volumes" Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.548622 4820 generic.go:334] "Generic (PLEG): container finished" podID="e0a14fdd-7df9-4cac-aa21-b4562f320fcc" containerID="6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916" exitCode=0 Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.548687 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerDied","Data":"6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916"} Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.553009 4820 generic.go:334] "Generic (PLEG): container finished" podID="21d2b3a6-8a28-4287-8953-23782681799a" containerID="e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633" exitCode=0 Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.553075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerDied","Data":"e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633"} Feb 21 08:04:33 crc kubenswrapper[4820]: I0221 08:04:33.404301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 21 08:04:33 crc kubenswrapper[4820]: I0221 08:04:33.743537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.307396 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:04:34 crc 
kubenswrapper[4820]: I0221 08:04:34.377552 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"] Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.596832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.600184 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"c202736b70c3c50812e2fb33af94d562a988d35bcd83dc4b9d88d0b245141ccf"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.605604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns" containerID="cri-o://57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" gracePeriod=10 Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.605799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"ce3abed658322c71c990f6ab5191ec01dfe91aad9111d477e6c24792ec8d9bf4"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.636607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vm7mv" podStartSLOduration=9.917228195 podStartE2EDuration="18.636584966s" podCreationTimestamp="2026-02-21 08:04:16 +0000 UTC" firstStartedPulling="2026-02-21 08:04:25.502462578 +0000 UTC m=+4640.535546776" lastFinishedPulling="2026-02-21 08:04:34.221819349 +0000 UTC m=+4649.254903547" observedRunningTime="2026-02-21 08:04:34.622948087 +0000 UTC m=+4649.656032295" 
watchObservedRunningTime="2026-02-21 08:04:34.636584966 +0000 UTC m=+4649.669669164" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.664519 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.161269163 podStartE2EDuration="38.66448788s" podCreationTimestamp="2026-02-21 08:03:56 +0000 UTC" firstStartedPulling="2026-02-21 08:03:58.862463835 +0000 UTC m=+4613.895548033" lastFinishedPulling="2026-02-21 08:04:24.365682552 +0000 UTC m=+4639.398766750" observedRunningTime="2026-02-21 08:04:34.661083517 +0000 UTC m=+4649.694167705" watchObservedRunningTime="2026-02-21 08:04:34.66448788 +0000 UTC m=+4649.697572078" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.697611 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.840901586 podStartE2EDuration="39.697563783s" podCreationTimestamp="2026-02-21 08:03:55 +0000 UTC" firstStartedPulling="2026-02-21 08:03:57.560322742 +0000 UTC m=+4612.593406940" lastFinishedPulling="2026-02-21 08:04:24.416984939 +0000 UTC m=+4639.450069137" observedRunningTime="2026-02-21 08:04:34.695640651 +0000 UTC m=+4649.728724849" watchObservedRunningTime="2026-02-21 08:04:34.697563783 +0000 UTC m=+4649.730647981" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.025753 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.172313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w" (OuterVolumeSpecName: "kube-api-access-x8j4w") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "kube-api-access-x8j4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.187979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config" (OuterVolumeSpecName: "config") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.201174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257026 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257068 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257082 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614034 4820 generic.go:334] "Generic (PLEG): container finished" podID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" exitCode=0
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614087 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614084 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"}
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd"}
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614289 4820 scope.go:117] "RemoveContainer" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.631871 4820 scope.go:117] "RemoveContainer" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658155 4820 scope.go:117] "RemoveContainer" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658486 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:04:35 crc kubenswrapper[4820]: E0221 08:04:35.658780 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": container with ID starting with 57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26 not found: ID does not exist" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658827 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"} err="failed to get container status \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": rpc error: code = NotFound desc = could not find container \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": container with ID starting with 57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26 not found: ID does not exist"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658852 4820 scope.go:117] "RemoveContainer" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"
Feb 21 08:04:35 crc kubenswrapper[4820]: E0221 08:04:35.659200 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": container with ID starting with 575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae not found: ID does not exist" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.659226 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"} err="failed to get container status \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": rpc error: code = NotFound desc = could not find container \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": container with ID starting with 575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae not found: ID does not exist"
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.665214 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.708599 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" path="/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volumes"
Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.675644 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.676746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.697269 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:04:36 crc kubenswrapper[4820]: E0221 08:04:36.697496 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.864080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.864143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:37 crc kubenswrapper[4820]: I0221 08:04:37.909274 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vm7mv" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" probeResult="failure" output=<
Feb 21 08:04:37 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 08:04:37 crc kubenswrapper[4820]: >
Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.342989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.343052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.410963 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.778338 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 21 08:04:39 crc kubenswrapper[4820]: I0221 08:04:39.201579 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 21 08:04:39 crc kubenswrapper[4820]: I0221 08:04:39.276025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 21 08:04:41 crc kubenswrapper[4820]: I0221 08:04:41.722995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d"}
Feb 21 08:04:41 crc kubenswrapper[4820]: I0221 08:04:41.724646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36"}
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.104426 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v6p64"]
Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.104995 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="init"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105007 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="init"
Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.105019 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105026 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns"
Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.105035 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105040 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105212 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105229 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105810 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.109431 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.115845 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6p64"]
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.141332 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.141420 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.243058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.243460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.244163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.260849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.458331 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.882880 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6p64"]
Feb 21 08:04:45 crc kubenswrapper[4820]: W0221 08:04:45.886992 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041a8286_eca6_4595_8c96_dd70be516a57.slice/crio-6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57 WatchSource:0}: Error finding container 6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57: Status 404 returned error can't find the container with id 6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57
Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760607 4820 generic.go:334] "Generic (PLEG): container finished" podID="041a8286-eca6-4595-8c96-dd70be516a57" containerID="eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f" exitCode=0
Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerDied","Data":"eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f"}
Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerStarted","Data":"6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57"}
Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.919100 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.960660 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:47 crc kubenswrapper[4820]: I0221 08:04:47.434451 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"]
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.006507 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091249 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"041a8286-eca6-4595-8c96-dd70be516a57\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") "
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091296 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"041a8286-eca6-4595-8c96-dd70be516a57\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") "
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041a8286-eca6-4595-8c96-dd70be516a57" (UID: "041a8286-eca6-4595-8c96-dd70be516a57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.095225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg" (OuterVolumeSpecName: "kube-api-access-v8jwg") pod "041a8286-eca6-4595-8c96-dd70be516a57" (UID: "041a8286-eca6-4595-8c96-dd70be516a57"). InnerVolumeSpecName "kube-api-access-v8jwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.192516 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.192742 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.697225 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:04:48 crc kubenswrapper[4820]: E0221 08:04:48.697469 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774859 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerDied","Data":"6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57"}
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774895 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v6p64"
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774904 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57"
Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.775015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vm7mv" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" containerID="cri-o://d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" gracePeriod=2
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.145432 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.215669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") "
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") "
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") "
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216710 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities" (OuterVolumeSpecName: "utilities") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.231488 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r" (OuterVolumeSpecName: "kube-api-access-v567r") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "kube-api-access-v567r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.317775 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.317814 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.339392 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.418601 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783731 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" exitCode=0
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"}
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e"}
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783803 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783888 4820 scope.go:117] "RemoveContainer" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.804391 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"]
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.807546 4820 scope.go:117] "RemoveContainer" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.808276 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"]
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.831437 4820 scope.go:117] "RemoveContainer" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.850603 4820 scope.go:117] "RemoveContainer" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"
Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.853362 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": container with ID starting with d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d not found: ID does not exist" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853394 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"} err="failed to get container status \"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": rpc error: code = NotFound desc = could not find container \"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": container with ID starting with d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d not found: ID does not exist"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853416 4820 scope.go:117] "RemoveContainer" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"
Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.853786 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": container with ID starting with 0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0 not found: ID does not exist" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853841 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} err="failed to get container status \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": rpc error: code = NotFound desc = could not find container \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": container with ID starting with 0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0 not found: ID does not exist"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853856 4820 scope.go:117] "RemoveContainer" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"
Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.854207 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": container with ID starting with bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b not found: ID does not exist" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"
Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.854227 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"} err="failed to get container status \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": rpc error: code = NotFound desc = could not find container \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": container with ID starting with bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b not found: ID does not exist"
Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.713912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" path="/var/lib/kubelet/pods/3cfc6863-b2a0-4a8b-8445-d5bdc742e722/volumes"
Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.714965 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v6p64"]
Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.723287 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v6p64"]
Feb 21 08:04:53 crc kubenswrapper[4820]: I0221 08:04:53.705989 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041a8286-eca6-4595-8c96-dd70be516a57" path="/var/lib/kubelet/pods/041a8286-eca6-4595-8c96-dd70be516a57/volumes"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.713987 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pnwbk"]
Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714676 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-utilities"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714694 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-utilities"
Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714717 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714724 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update"
Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-content"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714744 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-content"
Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714760 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714768 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714937 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714949 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.715526 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.717855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.730039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnwbk"]
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.866307 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.866398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.967747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.967819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.968801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.990471 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.072715 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:04:57 crc kubenswrapper[4820]: W0221 08:04:57.505516 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a608f92_6849_4847_9b75_495f1d27b9cf.slice/crio-c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7 WatchSource:0}: Error finding container c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7: Status 404 returned error can't find the container with id c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7
Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.506441 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnwbk"]
Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.842775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerStarted","Data":"c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7"}
Feb 21 08:04:58 crc kubenswrapper[4820]: I0221 08:04:58.856557 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerID="3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce" exitCode=0
Feb 21 08:04:58 crc kubenswrapper[4820]: I0221 08:04:58.856598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerDied","Data":"3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce"}
Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.349614 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pnwbk"
Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.526761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"5a608f92-6849-4847-9b75-495f1d27b9cf\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") "
Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.526900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"5a608f92-6849-4847-9b75-495f1d27b9cf\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") "
Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.527621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a608f92-6849-4847-9b75-495f1d27b9cf" (UID: "5a608f92-6849-4847-9b75-495f1d27b9cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.531529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2" (OuterVolumeSpecName: "kube-api-access-62fg2") pod "5a608f92-6849-4847-9b75-495f1d27b9cf" (UID: "5a608f92-6849-4847-9b75-495f1d27b9cf"). InnerVolumeSpecName "kube-api-access-62fg2".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.629397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.629433 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.696572 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:00 crc kubenswrapper[4820]: E0221 08:05:00.696816 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerDied","Data":"c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7"} Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870203 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870208 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnwbk" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.259541 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:11 crc kubenswrapper[4820]: E0221 08:05:11.260307 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.260318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.260473 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.261553 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.274379 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: 
\"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496403 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.497235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: 
\"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.497533 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.530603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.593615 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.029100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.964933 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" exitCode=0 Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.965007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564"} Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.965250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" 
event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"efd1e37053ca7b159083d52c5a4734be25b2c4ff60daf6987b203225c6e020f2"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.697064 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:13 crc kubenswrapper[4820]: E0221 08:05:13.697593 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.975401 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerID="ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d" exitCode=0 Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.975474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.979222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.997130 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerID="e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36" exitCode=0 Feb 21 08:05:13 crc 
kubenswrapper[4820]: I0221 08:05:13.997437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.005971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.006559 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.008466 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" exitCode=0 Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.008538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.010810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.011387 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.031679 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.203623955 podStartE2EDuration="1m21.03165861s" podCreationTimestamp="2026-02-21 08:03:54 +0000 UTC" firstStartedPulling="2026-02-21 08:03:56.091267798 +0000 UTC m=+4611.124351986" lastFinishedPulling="2026-02-21 08:04:39.919302443 +0000 UTC m=+4654.952386641" observedRunningTime="2026-02-21 08:05:15.031523676 +0000 UTC m=+4690.064607884" watchObservedRunningTime="2026-02-21 08:05:15.03165861 +0000 UTC m=+4690.064742808" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.079695 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.145840415 podStartE2EDuration="1m22.079680388s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:55.984248857 +0000 UTC m=+4611.017333055" lastFinishedPulling="2026-02-21 08:04:39.91808883 +0000 UTC m=+4654.951173028" observedRunningTime="2026-02-21 08:05:15.076048539 +0000 UTC m=+4690.109132737" watchObservedRunningTime="2026-02-21 08:05:15.079680388 +0000 UTC m=+4690.112764586" Feb 21 08:05:17 crc kubenswrapper[4820]: I0221 08:05:17.029546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} Feb 21 08:05:17 crc kubenswrapper[4820]: I0221 08:05:17.061035 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc562" podStartSLOduration=3.232238631 podStartE2EDuration="6.061020623s" podCreationTimestamp="2026-02-21 08:05:11 +0000 UTC" firstStartedPulling="2026-02-21 08:05:12.966283945 +0000 UTC m=+4687.999368143" lastFinishedPulling="2026-02-21 08:05:15.795065937 +0000 UTC m=+4690.828150135" observedRunningTime="2026-02-21 08:05:17.056001007 +0000 UTC m=+4692.089085195" watchObservedRunningTime="2026-02-21 08:05:17.061020623 +0000 
UTC m=+4692.094104811" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.594882 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.595505 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.648353 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:22 crc kubenswrapper[4820]: I0221 08:05:22.105934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:22 crc kubenswrapper[4820]: I0221 08:05:22.160596 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.072419 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc562" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" containerID="cri-o://f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" gracePeriod=2 Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.600767 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746763 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.747629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities" (OuterVolumeSpecName: "utilities") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.751647 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr" (OuterVolumeSpecName: "kube-api-access-2ccdr") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "kube-api-access-2ccdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.783020 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849319 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849370 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849388 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080921 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" exitCode=0 Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080976 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"efd1e37053ca7b159083d52c5a4734be25b2c4ff60daf6987b203225c6e020f2"} Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.081011 4820 scope.go:117] "RemoveContainer" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.099636 4820 scope.go:117] "RemoveContainer" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.120921 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.121806 4820 scope.go:117] "RemoveContainer" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.130347 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169132 4820 scope.go:117] "RemoveContainer" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 08:05:25.169567 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": container with ID starting with f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc not found: ID does not exist" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169598 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} err="failed to get container status \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": rpc error: code = NotFound desc = could not find container \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": container with ID starting with f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169619 4820 scope.go:117] "RemoveContainer" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 08:05:25.169936 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": container with ID starting with 3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db not found: ID does not exist" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170001 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} err="failed to get container status \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": rpc error: code = NotFound desc = could not find container \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": container with ID starting with 3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170036 4820 scope.go:117] "RemoveContainer" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 
08:05:25.170723 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": container with ID starting with 006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564 not found: ID does not exist" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170751 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564"} err="failed to get container status \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": rpc error: code = NotFound desc = could not find container \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": container with ID starting with 006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564 not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.441440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.448463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.718829 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" path="/var/lib/kubelet/pods/dc79f9d5-f05f-41ee-849f-1c29ec7b382a/volumes" Feb 21 08:05:28 crc kubenswrapper[4820]: I0221 08:05:28.696263 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:28 crc kubenswrapper[4820]: E0221 08:05:28.696730 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.060936 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-utilities" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061765 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-utilities" Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061790 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061796 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061807 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-content" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-content" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061940 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.063169 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.077155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: 
\"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.258275 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.258298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.283858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.387855 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:32.679614 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:32.813853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:33 crc kubenswrapper[4820]: W0221 08:05:32.815202 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8165e702_d96e_4273_8536_7e6e363482d4.slice/crio-41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c WatchSource:0}: Error finding container 41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c: Status 404 returned error can't find the container with id 41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137051 4820 generic.go:334] "Generic (PLEG): container finished" podID="8165e702-d96e-4273-8536-7e6e363482d4" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" exitCode=0 Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a"} Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerStarted","Data":"41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c"} Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.264086 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:34 crc 
kubenswrapper[4820]: I0221 08:05:34.145664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerStarted","Data":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} Feb 21 08:05:34 crc kubenswrapper[4820]: I0221 08:05:34.145973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:34 crc kubenswrapper[4820]: I0221 08:05:34.172459 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" podStartSLOduration=2.172439757 podStartE2EDuration="2.172439757s" podCreationTimestamp="2026-02-21 08:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:05:34.167883114 +0000 UTC m=+4709.200967322" watchObservedRunningTime="2026-02-21 08:05:34.172439757 +0000 UTC m=+4709.205523955" Feb 21 08:05:36 crc kubenswrapper[4820]: I0221 08:05:36.874656 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq" containerID="cri-o://9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848" gracePeriod=604796 Feb 21 08:05:37 crc kubenswrapper[4820]: I0221 08:05:37.778179 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq" containerID="cri-o://7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f" gracePeriod=604796 Feb 21 08:05:40 crc kubenswrapper[4820]: I0221 08:05:40.697539 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:40 crc kubenswrapper[4820]: E0221 08:05:40.697994 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.389452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.464231 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.464516 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns" containerID="cri-o://95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" gracePeriod=10 Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.895947 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971635 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.979504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn" (OuterVolumeSpecName: "kube-api-access-xkxxn") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "kube-api-access-xkxxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.005900 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.008347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config" (OuterVolumeSpecName: "config") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073192 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073205 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211119 4820 generic.go:334] "Generic (PLEG): container finished" podID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" exitCode=0 Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211403 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"c0fbba8abdf7dcc3b8fefeafbe554110877b155984d0717a8a1b1d9fb8c1f3ce"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211710 4820 scope.go:117] "RemoveContainer" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.217068 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerID="9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848" exitCode=0 Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.217156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.245136 4820 scope.go:117] "RemoveContainer" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.253538 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.268262 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.270839 4820 
scope.go:117] "RemoveContainer" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: E0221 08:05:43.271680 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": container with ID starting with 95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329 not found: ID does not exist" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.271729 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} err="failed to get container status \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": rpc error: code = NotFound desc = could not find container \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": container with ID starting with 95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329 not found: ID does not exist" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.271760 4820 scope.go:117] "RemoveContainer" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: E0221 08:05:43.272110 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": container with ID starting with 34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed not found: ID does not exist" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.272151 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed"} err="failed to get container status \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": rpc error: code = NotFound desc = could not find container \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": container with ID starting with 34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed not found: ID does not exist" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.370573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376375 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376409 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376458 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod 
\"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376606 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376824 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.377104 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379379 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info" (OuterVolumeSpecName: "pod-info") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69" (OuterVolumeSpecName: "kube-api-access-r8c69") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "kube-api-access-r8c69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.381023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.382320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.408101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb" (OuterVolumeSpecName: "persistence") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.421460 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data" (OuterVolumeSpecName: "config-data") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.437023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf" (OuterVolumeSpecName: "server-conf") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.464679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478102 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478144 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478156 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478164 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478172 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478181 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478188 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc 
kubenswrapper[4820]: I0221 08:05:43.478195 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478203 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478258 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") on node \"crc\" " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478271 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.492889 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.493098 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb") on node "crc" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.580028 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.707026 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" path="/var/lib/kubelet/pods/95cd39a3-df2b-4f19-bf18-d5fcf790995e/volumes" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.230098 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerID="7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f" exitCode=0 Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.230145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f"} Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128"} Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231437 4820 scope.go:117] "RemoveContainer" containerID="9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231593 4820 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.308477 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.324144 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.324491 4820 scope.go:117] "RemoveContainer" containerID="e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.333164 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.372651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373117 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="init" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373144 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="init" Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373170 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="setup-container" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373181 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="setup-container" Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373210 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373232 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="setup-container" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="setup-container" Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373373 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns" Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373387 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373397 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373654 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373676 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373697 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.375112 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382266 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382554 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382756 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382772 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382955 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.383059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7r6cd" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.383118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 
21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399362 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399417 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc 
kubenswrapper[4820]: I0221 08:05:44.399506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399541 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.401621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod 
\"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500526 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500670 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500853 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " 
pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500902 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500991 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins" 
(OuterVolumeSpecName: "rabbitmq-plugins") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.502063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.502964 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: 
"3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.505012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info" (OuterVolumeSpecName: "pod-info") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr" (OuterVolumeSpecName: "kube-api-access-mhfzr") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "kube-api-access-mhfzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508303 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " 
pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.510736 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data" (OuterVolumeSpecName: "config-data") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522703 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522738 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/934de71409e5f275cb94cfa922d2597bbcc02a71598b29b6833fab6760155167/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.525547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2" (OuterVolumeSpecName: "persistence") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "pvc-84000e1c-c116-40f4-8806-c604396f3af2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.552854 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf" (OuterVolumeSpecName: "server-conf") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.557644 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.593944 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602773 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602802 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602813 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602824 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 
crc kubenswrapper[4820]: I0221 08:05:44.602833 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602842 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602863 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") on node \"crc\" " Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602872 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602888 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602896 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602904 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.617878 4820 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.618072 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-84000e1c-c116-40f4-8806-c604396f3af2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2") on node "crc" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.704210 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.708755 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.130127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.238538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"12e981471e18de239f46c75b4371041708ca1be059f57feb9a45c0ba679ff1ca"} Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3"} Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240410 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240415 4820 scope.go:117] "RemoveContainer" containerID="7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.270075 4820 scope.go:117] "RemoveContainer" containerID="ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.279307 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.285515 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.295771 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.297116 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.298995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.302860 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303259 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303457 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z7jtg" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303575 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.304325 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.304725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.311222 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415006 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415113 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415209 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.516600 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.517750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.518961 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.518986 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb4bbdf2b86e995ba706b4b62c0c402d7bc60ad53da33c49f02f1a8b30c7c64a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.519930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.517782 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.519992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520926 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.521503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.522122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.523503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.523606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.524087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: 
I0221 08:05:45.524972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.543849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.554363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.635630 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.714616 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" path="/var/lib/kubelet/pods/3d51a301-b647-44f6-bd29-7db35420fa2c/volumes" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.715938 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" path="/var/lib/kubelet/pods/e1252400-6674-4a2e-a4ad-dc8f7fc45dee/volumes" Feb 21 08:05:46 crc kubenswrapper[4820]: I0221 08:05:46.091211 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:46 crc kubenswrapper[4820]: I0221 08:05:46.251578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"aa03e2c65aae9f5463c21f0cd927ddaa99517136c5633ab70c3b64a8481658c8"} Feb 21 08:05:47 crc kubenswrapper[4820]: I0221 08:05:47.263955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd"} Feb 21 08:05:48 crc kubenswrapper[4820]: I0221 08:05:48.272516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d"} Feb 21 08:05:53 crc kubenswrapper[4820]: I0221 08:05:53.696763 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:53 crc kubenswrapper[4820]: E0221 08:05:53.698367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:05 crc kubenswrapper[4820]: I0221 08:06:05.702711 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:05 crc kubenswrapper[4820]: E0221 08:06:05.703578 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:17 crc kubenswrapper[4820]: I0221 08:06:17.697052 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:17 crc kubenswrapper[4820]: E0221 08:06:17.697887 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.517539 4820 generic.go:334] "Generic (PLEG): container finished" podID="57d094d7-d5d2-4276-b0c2-cb98a15c0c3d" containerID="eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d" exitCode=0 Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.517599 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerDied","Data":"eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d"} Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.519289 4820 generic.go:334] "Generic (PLEG): container finished" podID="8195e98f-70c8-4758-9d0a-e3a95de45075" containerID="e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd" exitCode=0 Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.519321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerDied","Data":"e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.527841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"da87fcd901c3ff0404185ecadc46ae54cb6f88c3fb2093d35d1985eeabbc62a7"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.528626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.529222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"1475dfb713358b3c39a4e549c4134731da0735fb3d439b6da007f0e445f57e7e"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.529493 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.555548 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.555529606 podStartE2EDuration="36.555529606s" 
podCreationTimestamp="2026-02-21 08:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:06:20.547762115 +0000 UTC m=+4755.580846313" watchObservedRunningTime="2026-02-21 08:06:20.555529606 +0000 UTC m=+4755.588613804" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.587352 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.587332294 podStartE2EDuration="35.587332294s" podCreationTimestamp="2026-02-21 08:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:06:20.583396257 +0000 UTC m=+4755.616480475" watchObservedRunningTime="2026-02-21 08:06:20.587332294 +0000 UTC m=+4755.620416492" Feb 21 08:06:30 crc kubenswrapper[4820]: I0221 08:06:30.697418 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:30 crc kubenswrapper[4820]: E0221 08:06:30.698306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:34 crc kubenswrapper[4820]: I0221 08:06:34.712445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 08:06:35 crc kubenswrapper[4820]: I0221 08:06:35.638445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.239420 4820 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.240525 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.243073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.249429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.368753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.470452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.493607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.577273 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.062085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.073182 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.660916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerStarted","Data":"2c64262dba04eb03748f3b009a919554604fe4b3b0a2b587e5806fe9484db531"} Feb 21 08:06:40 crc kubenswrapper[4820]: I0221 08:06:40.669659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerStarted","Data":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"} Feb 21 08:06:40 crc kubenswrapper[4820]: I0221 08:06:40.686593 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.215972114 podStartE2EDuration="2.686564979s" podCreationTimestamp="2026-02-21 08:06:38 +0000 UTC" firstStartedPulling="2026-02-21 08:06:39.072957091 +0000 UTC m=+4774.106041289" lastFinishedPulling="2026-02-21 08:06:39.543549916 +0000 UTC m=+4774.576634154" observedRunningTime="2026-02-21 08:06:40.68397483 +0000 UTC m=+4775.717059068" watchObservedRunningTime="2026-02-21 08:06:40.686564979 +0000 UTC m=+4775.719649197" Feb 21 08:06:41 crc kubenswrapper[4820]: I0221 08:06:41.697387 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:41 crc kubenswrapper[4820]: E0221 08:06:41.697611 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.153090 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.153914 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client" containerID="cri-o://ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" gracePeriod=30 Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.620930 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.692396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"b79bec21-0f86-4055-b9f6-09e36fca39d7\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.697117 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:52 crc kubenswrapper[4820]: E0221 08:06:52.697478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:52 crc 
kubenswrapper[4820]: I0221 08:06:52.698286 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm" (OuterVolumeSpecName: "kube-api-access-gpqqm") pod "b79bec21-0f86-4055-b9f6-09e36fca39d7" (UID: "b79bec21-0f86-4055-b9f6-09e36fca39d7"). InnerVolumeSpecName "kube-api-access-gpqqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758322 4820 generic.go:334] "Generic (PLEG): container finished" podID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" exitCode=143 Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758382 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerDied","Data":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"} Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerDied","Data":"2c64262dba04eb03748f3b009a919554604fe4b3b0a2b587e5806fe9484db531"} Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758482 4820 scope.go:117] "RemoveContainer" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758397 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.780073 4820 scope.go:117] "RemoveContainer" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" Feb 21 08:06:52 crc kubenswrapper[4820]: E0221 08:06:52.781651 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": container with ID starting with ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876 not found: ID does not exist" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.781706 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"} err="failed to get container status \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": rpc error: code = NotFound desc = could not find container \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": container with ID starting with ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876 not found: ID does not exist" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.795576 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") on node \"crc\" DevicePath \"\"" Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.797931 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.802798 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:53 crc kubenswrapper[4820]: I0221 08:06:53.707176 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" path="/var/lib/kubelet/pods/b79bec21-0f86-4055-b9f6-09e36fca39d7/volumes" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.859765 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:01 crc kubenswrapper[4820]: E0221 08:07:01.860909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.860924 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.861099 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.862830 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.866088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916850 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017627 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.018040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.018082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.048298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.203349 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.662006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.832804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerStarted","Data":"e00deb2d54d453a338e2ace296a8af912f720f02a775533b7be2f4812bc8f721"} Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.696936 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:03 crc kubenswrapper[4820]: E0221 08:07:03.697166 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.841284 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85" exitCode=0 Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.841338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"} Feb 21 08:07:04 crc kubenswrapper[4820]: I0221 08:07:04.849978 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" 
containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e" exitCode=0 Feb 21 08:07:04 crc kubenswrapper[4820]: I0221 08:07:04.850448 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"} Feb 21 08:07:05 crc kubenswrapper[4820]: I0221 08:07:05.859481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerStarted","Data":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"} Feb 21 08:07:05 crc kubenswrapper[4820]: I0221 08:07:05.879740 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k7zm4" podStartSLOduration=3.4744456980000002 podStartE2EDuration="4.879723928s" podCreationTimestamp="2026-02-21 08:07:01 +0000 UTC" firstStartedPulling="2026-02-21 08:07:03.84351183 +0000 UTC m=+4798.876596028" lastFinishedPulling="2026-02-21 08:07:05.24879007 +0000 UTC m=+4800.281874258" observedRunningTime="2026-02-21 08:07:05.876432268 +0000 UTC m=+4800.909516476" watchObservedRunningTime="2026-02-21 08:07:05.879723928 +0000 UTC m=+4800.912808126" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.652715 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.654545 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.683066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.781077 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.781594 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.801382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.002081 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.287013 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875669 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" exitCode=0 Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875837 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a"} Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"f60cef1898ab6a9f09adbdcbca2c7577c703d38c3f92d6706838ae586d0ae809"} Feb 21 08:07:08 crc kubenswrapper[4820]: I0221 08:07:08.887093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"} Feb 21 08:07:09 crc kubenswrapper[4820]: I0221 08:07:09.896215 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" exitCode=0 Feb 21 08:07:09 crc kubenswrapper[4820]: I0221 08:07:09.896284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" 
event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"} Feb 21 08:07:10 crc kubenswrapper[4820]: I0221 08:07:10.905867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"} Feb 21 08:07:10 crc kubenswrapper[4820]: I0221 08:07:10.932128 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxqfq" podStartSLOduration=2.527707543 podStartE2EDuration="4.93210583s" podCreationTimestamp="2026-02-21 08:07:06 +0000 UTC" firstStartedPulling="2026-02-21 08:07:07.877435844 +0000 UTC m=+4802.910520042" lastFinishedPulling="2026-02-21 08:07:10.281834131 +0000 UTC m=+4805.314918329" observedRunningTime="2026-02-21 08:07:10.92578413 +0000 UTC m=+4805.958868328" watchObservedRunningTime="2026-02-21 08:07:10.93210583 +0000 UTC m=+4805.965190038" Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.204274 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.205229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.240541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.976020 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:13 crc kubenswrapper[4820]: I0221 08:07:13.243710 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:14 crc kubenswrapper[4820]: I0221 08:07:14.931073 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k7zm4" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" containerID="cri-o://aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" gracePeriod=2 Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.316388 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.411106 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities" (OuterVolumeSpecName: "utilities") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: 
"d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.415259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks" (OuterVolumeSpecName: "kube-api-access-2xmks") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: "d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "kube-api-access-2xmks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.512557 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.512593 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.868278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: "d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.919040 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939552 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" exitCode=0 Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"} Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"e00deb2d54d453a338e2ace296a8af912f720f02a775533b7be2f4812bc8f721"} Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939650 4820 scope.go:117] "RemoveContainer" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939693 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.960349 4820 scope.go:117] "RemoveContainer" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e" Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.975361 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.980855 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"] Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.985519 4820 scope.go:117] "RemoveContainer" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.009666 4820 scope.go:117] "RemoveContainer" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.010103 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": container with ID starting with aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908 not found: ID does not exist" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010134 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"} err="failed to get container status \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": rpc error: code = NotFound desc = could not find container \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": container with ID starting with aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908 not 
found: ID does not exist" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010157 4820 scope.go:117] "RemoveContainer" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e" Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.010621 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": container with ID starting with e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e not found: ID does not exist" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010660 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"} err="failed to get container status \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": rpc error: code = NotFound desc = could not find container \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": container with ID starting with e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e not found: ID does not exist" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010685 4820 scope.go:117] "RemoveContainer" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85" Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.011001 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": container with ID starting with 1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85 not found: ID does not exist" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85" Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.011027 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"} err="failed to get container status \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": rpc error: code = NotFound desc = could not find container \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": container with ID starting with 1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85 not found: ID does not exist" Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.003004 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.003050 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.043389 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.716918 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88cd1ee-a295-429d-9e88-133376560585" path="/var/lib/kubelet/pods/d88cd1ee-a295-429d-9e88-133376560585/volumes" Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.997580 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:18 crc kubenswrapper[4820]: I0221 08:07:18.643222 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:18 crc kubenswrapper[4820]: I0221 08:07:18.696555 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:18 crc kubenswrapper[4820]: E0221 08:07:18.696815 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:07:19 crc kubenswrapper[4820]: I0221 08:07:19.965479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxqfq" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" containerID="cri-o://47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" gracePeriod=2 Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.870409 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: 
\"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.898266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities" (OuterVolumeSpecName: "utilities") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.904065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d" (OuterVolumeSpecName: "kube-api-access-mhr6d") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "kube-api-access-mhr6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.954581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975643 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" exitCode=0 Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"} Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975716 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"f60cef1898ab6a9f09adbdcbca2c7577c703d38c3f92d6706838ae586d0ae809"} Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975755 4820 scope.go:117] "RemoveContainer" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.996909 4820 scope.go:117] "RemoveContainer" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004890 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004934 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") on node \"crc\" DevicePath \"\"" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.016918 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.024266 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"] Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.038445 4820 scope.go:117] "RemoveContainer" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.052795 4820 scope.go:117] "RemoveContainer" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.053177 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": container with ID starting with 47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb not found: ID does not exist" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053261 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"} err="failed to get container status \"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": rpc error: code = NotFound desc = could not find container 
\"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": container with ID starting with 47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053311 4820 scope.go:117] "RemoveContainer" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.053702 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": container with ID starting with 2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961 not found: ID does not exist" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053740 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"} err="failed to get container status \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": rpc error: code = NotFound desc = could not find container \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": container with ID starting with 2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961 not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053768 4820 scope.go:117] "RemoveContainer" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.054138 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": container with ID starting with 5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a not found: ID does not exist" 
containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.054221 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a"} err="failed to get container status \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": rpc error: code = NotFound desc = could not find container \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": container with ID starting with 5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.705571 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" path="/var/lib/kubelet/pods/025c1a7f-4a20-4175-b9a4-b21563a944fb/volumes" Feb 21 08:07:32 crc kubenswrapper[4820]: I0221 08:07:32.696795 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:32 crc kubenswrapper[4820]: E0221 08:07:32.697483 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:07:47 crc kubenswrapper[4820]: I0221 08:07:47.697043 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:47 crc kubenswrapper[4820]: E0221 08:07:47.697846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:08:00 crc kubenswrapper[4820]: I0221 08:08:00.696207 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:08:00 crc kubenswrapper[4820]: E0221 08:08:00.697043 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:08:14 crc kubenswrapper[4820]: I0221 08:08:14.698017 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:08:15 crc kubenswrapper[4820]: I0221 08:08:15.414119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} Feb 21 08:08:18 crc kubenswrapper[4820]: I0221 08:08:18.147542 4820 scope.go:117] "RemoveContainer" containerID="abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.853081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 
21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854075 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854093 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854101 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854120 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854127 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854141 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854147 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854164 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854172 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854189 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-content" Feb 21 
08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854197 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854426 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854454 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854909 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.857547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.883112 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.970199 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.970542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.072024 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.072093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.074791 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.074824 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de5af2121fd419b811e8abb08129b759e3658785e5d5b3364ba51c94bc9f7907/globalmount\"" pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.093161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.097512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.177303 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.655226 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.104113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerStarted","Data":"9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49"} Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.104179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerStarted","Data":"707475d0c6275ed4702ec4fee55d65d5c005c4843fb7b9c91608c48f928cd4c6"} Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.130366 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.130347566 podStartE2EDuration="3.130347566s" podCreationTimestamp="2026-02-21 08:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:09:53.122622206 +0000 UTC m=+4968.155706424" watchObservedRunningTime="2026-02-21 08:09:53.130347566 +0000 UTC m=+4968.163431764" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.616589 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.618012 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.622543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.740355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.842399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.858645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.942391 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:56 crc kubenswrapper[4820]: I0221 08:09:56.371429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.129710 4820 generic.go:334] "Generic (PLEG): container finished" podID="db70ca85-292a-47ed-9028-c23b0e963849" containerID="7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5" exitCode=0 Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.129774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db70ca85-292a-47ed-9028-c23b0e963849","Type":"ContainerDied","Data":"7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5"} Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.130369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db70ca85-292a-47ed-9028-c23b0e963849","Type":"ContainerStarted","Data":"3ce6c6e87eb577c68cb66308708ff0ba300ad70780045e921ab8ece0b7911121"} Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.426120 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.470781 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_db70ca85-292a-47ed-9028-c23b0e963849/mariadb-client/0.log" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.495512 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.500861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.589725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"db70ca85-292a-47ed-9028-c23b0e963849\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.596585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh" (OuterVolumeSpecName: "kube-api-access-rcsqh") pod "db70ca85-292a-47ed-9028-c23b0e963849" (UID: "db70ca85-292a-47ed-9028-c23b0e963849"). InnerVolumeSpecName "kube-api-access-rcsqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.616867 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: E0221 08:09:58.617136 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617344 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617807 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.625808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.691222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.691432 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") on node \"crc\" DevicePath \"\"" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.792433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfk4\" (UniqueName: 
\"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.807954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.950183 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.144185 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce6c6e87eb577c68cb66308708ff0ba300ad70780045e921ab8ece0b7911121" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.144272 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.160355 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="db70ca85-292a-47ed-9028-c23b0e963849" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.340807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:59 crc kubenswrapper[4820]: W0221 08:09:59.343345 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ffe630a_95af_4704_b580_f934102c5c4f.slice/crio-85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1 WatchSource:0}: Error finding container 85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1: Status 404 returned error can't find the container with id 85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1 Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.705337 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db70ca85-292a-47ed-9028-c23b0e963849" path="/var/lib/kubelet/pods/db70ca85-292a-47ed-9028-c23b0e963849/volumes" Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153690 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ffe630a-95af-4704-b580-f934102c5c4f" containerID="8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035" exitCode=0 Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"4ffe630a-95af-4704-b580-f934102c5c4f","Type":"ContainerDied","Data":"8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035"} Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"4ffe630a-95af-4704-b580-f934102c5c4f","Type":"ContainerStarted","Data":"85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1"} Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.521290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.539561 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_4ffe630a-95af-4704-b580-f934102c5c4f/mariadb-client/0.log" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.566775 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.574980 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.641540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"4ffe630a-95af-4704-b580-f934102c5c4f\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.647925 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4" (OuterVolumeSpecName: "kube-api-access-llfk4") pod "4ffe630a-95af-4704-b580-f934102c5c4f" (UID: "4ffe630a-95af-4704-b580-f934102c5c4f"). InnerVolumeSpecName "kube-api-access-llfk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.718762 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" path="/var/lib/kubelet/pods/4ffe630a-95af-4704-b580-f934102c5c4f/volumes" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.744090 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:02 crc kubenswrapper[4820]: I0221 08:10:02.170005 4820 scope.go:117] "RemoveContainer" containerID="8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035" Feb 21 08:10:02 crc kubenswrapper[4820]: I0221 08:10:02.170085 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.420099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:30 crc kubenswrapper[4820]: E0221 08:10:30.421074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421252 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421996 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.424699 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gxlth" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425056 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425230 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.432019 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.450844 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.452484 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.457972 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.466295 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.467509 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.504299 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.520638 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: 
I0221 08:10:30.527531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.632052 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.632123 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633496 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633556 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.634004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.634034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.635171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.636150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.637213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 
08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.644319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.644973 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.645008 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7f5a91cf66baa124de39703812a65cbead766845674401f574f4477cbb5ca47/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.655994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.660728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.668900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.686957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735704 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " 
pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736028 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736070 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736148 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736746 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738157 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738674 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738700 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/17960b22741a80753ff36376b4cd4e9eaaca50bec0a188a2737fcad06b3ddbc4/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738985 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739006 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d716636993c182d97458935b224f5a7dc8e62f8801baf3c286a46a0042ece6e3/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: 
\"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.740285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.747594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.750987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.753612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759542 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.761074 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.763226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.780302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.782450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.789609 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.821106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.838152 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.323206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.375468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"26fc385354a329c447d5c1a9398ba80883737fc2bdac13520c6ff94a124a2852"} Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.421341 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:31 crc kubenswrapper[4820]: W0221 08:10:31.424971 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7377f38_4907_4b1d_a339_f274c122ef5c.slice/crio-27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724 WatchSource:0}: Error finding container 27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724: Status 404 returned error can't find the container with id 27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724 Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.967078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:31 crc kubenswrapper[4820]: W0221 08:10:31.978936 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff1c87b_f0e7_4917_a5ce_291ff2b6bd37.slice/crio-ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36 WatchSource:0}: Error finding container ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36: Status 404 returned error can't find the container with id ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36 Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.393696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36"} Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.395942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724"} Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.673068 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.684035 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.687199 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.690380 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.692340 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.695217 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.695334 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-g2nsk" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.711585 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.715712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.724275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.725889 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.731780 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.744774 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777746 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: 
I0221 08:10:32.777881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777989 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778217 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778281 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778448 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778540 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " 
pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 
08:10:32.880005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " 
pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.881096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.881493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.882005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.883554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.883974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.884401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.887445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.887791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888806 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888835 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d6601ff4899818f7c8d8ba59347d6f94eb8d73b6c448990ea8ab0ab7adb5c28/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.890054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.890061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.891300 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.891338 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/963b89af96c0185a8f1cccdb2c155d83efe3a56eed47acaa45859e65a7377fb3/globalmount\"" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.892559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.893276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.895716 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.895761 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24806ec7245ec04bce8cf628211bfb0c56c08782cce785ca4ab86ba4e6fee2a6/globalmount\"" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.897507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.899623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.904416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.911218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: 
\"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.928574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.939654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.956164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.030154 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.045374 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.057225 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.724985 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:34 crc kubenswrapper[4820]: I0221 08:10:34.152800 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:34 crc kubenswrapper[4820]: I0221 08:10:34.630220 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:35 crc kubenswrapper[4820]: W0221 08:10:35.207476 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aaa256c_7102_4960_ade0_b903b29b2716.slice/crio-688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956 WatchSource:0}: Error finding container 688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956: Status 404 returned error can't find the container with id 688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956 Feb 21 08:10:35 crc kubenswrapper[4820]: W0221 08:10:35.212799 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66a6723b_ff49_4d22_a6cd_1e9509165729.slice/crio-7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082 WatchSource:0}: Error finding container 7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082: Status 404 returned error can't find the container with id 7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082 Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.416126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082"} Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.417351 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"16f4889cd26e8affb691bc0a5c22d09707ede4ed9b34c5628c2c3b7fbed1a752"} Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.418306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.431049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"39546c24134e65b25010dd854838cabe029c35d7db634514ad69460ec908ef36"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.431433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"f3039c9e08abc5ed462ca2195269e3257621d8be1483eecd52c05059d075ed73"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.435616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"cbeda9872c6653a713f344b7ab3c51e36f8001b85afa7b7c7864bd68cd59377d"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.437938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"449db6b634a25506f0e516a9f39bd0b0c359299187d2f1dd4aedc0fb9b5dd721"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.437988 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"18f1601d2133b0f49ae8ca812831181b495ccb6ae74c12b6d27fae63ed7e5425"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.439875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"9b17ebb5989ecd7d75732d0cb6d13c172799a8fe0010291d8003987d51c7a19e"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.441752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"08b3d21c8cc1b6778a5d54db6251c8627fe07b5ee828ebc5ba3d5dfde3538ef9"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.441783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"77a70d21a868d8e957b8342988a314baf00c861f744a8676bb68fc451deb303a"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.446082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"23d76b3087435cd4d210ee2a42b85e548a2cba9f25800a8e4bb6ea4b93a38ca0"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.466927 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.417055103 podStartE2EDuration="7.46690681s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.427642824 +0000 UTC m=+5006.460727022" lastFinishedPulling="2026-02-21 08:10:35.477494531 +0000 UTC m=+5010.510578729" observedRunningTime="2026-02-21 08:10:36.462664026 +0000 UTC m=+5011.495748234" watchObservedRunningTime="2026-02-21 08:10:36.46690681 +0000 UTC m=+5011.499991008" Feb 21 08:10:36 crc 
kubenswrapper[4820]: I0221 08:10:36.488736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.001065346 podStartE2EDuration="7.488719671s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.980567765 +0000 UTC m=+5007.013651963" lastFinishedPulling="2026-02-21 08:10:35.46822208 +0000 UTC m=+5010.501306288" observedRunningTime="2026-02-21 08:10:36.483906741 +0000 UTC m=+5011.516990939" watchObservedRunningTime="2026-02-21 08:10:36.488719671 +0000 UTC m=+5011.521803869" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.512031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.3056676019999998 podStartE2EDuration="7.512016472s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.327977338 +0000 UTC m=+5006.361061536" lastFinishedPulling="2026-02-21 08:10:35.534326208 +0000 UTC m=+5010.567410406" observedRunningTime="2026-02-21 08:10:36.505657319 +0000 UTC m=+5011.538741517" watchObservedRunningTime="2026-02-21 08:10:36.512016472 +0000 UTC m=+5011.545100660" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.790402 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.821646 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.838316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.455396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" 
event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"5a4b29aa4b5339175331fec948514136545de1c291628a73d5b27d0a58a5536d"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.459110 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"e805872eea6169cb25f75792fe1723de590d868a6f22b84e5414513c50c1f7ed"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.462435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"27f9737f693fe2927ce925215dcf1748116297134a6cb0791814257971733444"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.480207 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=5.568446358 podStartE2EDuration="6.480189157s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.215723888 +0000 UTC m=+5010.248808096" lastFinishedPulling="2026-02-21 08:10:36.127466697 +0000 UTC m=+5011.160550895" observedRunningTime="2026-02-21 08:10:37.473862826 +0000 UTC m=+5012.506947024" watchObservedRunningTime="2026-02-21 08:10:37.480189157 +0000 UTC m=+5012.513273355" Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.497501 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=5.584201405 podStartE2EDuration="6.497484175s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.211933045 +0000 UTC m=+5010.245017243" lastFinishedPulling="2026-02-21 08:10:36.125215825 +0000 UTC m=+5011.158300013" observedRunningTime="2026-02-21 08:10:37.492829359 +0000 UTC m=+5012.525913567" watchObservedRunningTime="2026-02-21 08:10:37.497484175 +0000 UTC m=+5012.530568373" Feb 21 
08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.514695 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.600945627 podStartE2EDuration="6.51467967s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.211890664 +0000 UTC m=+5010.244974862" lastFinishedPulling="2026-02-21 08:10:36.125624707 +0000 UTC m=+5011.158708905" observedRunningTime="2026-02-21 08:10:37.512411739 +0000 UTC m=+5012.545495957" watchObservedRunningTime="2026-02-21 08:10:37.51467967 +0000 UTC m=+5012.547763868" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.030605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.045483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.059520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.030783 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.046188 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.057636 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.075685 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.095730 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: 
I0221 08:10:39.114897 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.824479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.824852 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.864587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.864925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.896115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.896559 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.570787 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.573613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.590250 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.767694 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.769118 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.774077 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.779163 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " 
pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.931668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc 
kubenswrapper[4820]: I0221 08:10:40.931742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.931796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.948113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.094388 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.501308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:41 crc kubenswrapper[4820]: W0221 08:10:41.513122 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3911e64b_266d_4c66_9aec_4e26cec73c06.slice/crio-957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35 WatchSource:0}: Error finding container 957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35: Status 404 returned error can't find the container with id 957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35 Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.541047 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerStarted","Data":"957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35"} Feb 21 08:10:42 crc kubenswrapper[4820]: I0221 08:10:42.548764 4820 generic.go:334] "Generic (PLEG): container finished" podID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" exitCode=0 Feb 21 08:10:42 crc kubenswrapper[4820]: I0221 08:10:42.548900 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f"} Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.067846 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.081658 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" 
Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.099921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.353153 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.379415 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.392509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.394589 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.399664 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469263 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296sq\" 
(UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.561606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerStarted","Data":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.562779 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570734 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.571868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.578561 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" podStartSLOduration=3.578547519 podStartE2EDuration="3.578547519s" podCreationTimestamp="2026-02-21 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:43.578018674 +0000 UTC m=+5018.611102882" watchObservedRunningTime="2026-02-21 08:10:43.578547519 +0000 UTC m=+5018.611631727" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.589285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.714280 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.859388 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.859693 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.189152 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571132 4820 generic.go:334] "Generic (PLEG): container finished" podID="623dbf87-d39f-4026-9aa5-72d52508407b" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" exitCode=0 Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6"} Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerStarted","Data":"f79446424404530462274952dece2308d0d1ba04fb18b87302d89305eb07556f"} Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571787 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" containerID="cri-o://8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" gracePeriod=10 Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.013298 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.116958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117428 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.121347 4820 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4" (OuterVolumeSpecName: "kube-api-access-dh4h4") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "kube-api-access-dh4h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.153917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config" (OuterVolumeSpecName: "config") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.154654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.156181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219540 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219581 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219591 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219603 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.430688 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.431074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="init" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431096 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="init" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.431112 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431119 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" 
containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431275 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431896 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.434801 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.445914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581371 4820 generic.go:334] "Generic (PLEG): container 
finished" podID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" exitCode=0 Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581447 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581525 4820 scope.go:117] "RemoveContainer" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.585491 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerStarted","Data":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.585665 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.607555 4820 scope.go:117] "RemoveContainer" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.614382 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c565c565-4g68w" 
podStartSLOduration=2.614332501 podStartE2EDuration="2.614332501s" podCreationTimestamp="2026-02-21 08:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:45.61136091 +0000 UTC m=+5020.644445118" watchObservedRunningTime="2026-02-21 08:10:45.614332501 +0000 UTC m=+5020.647416689" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.637889 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638310 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638424 4820 scope.go:117] "RemoveContainer" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638415 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e5c2adb17362cb38b9613e55900aac4eb2dcd2074de3a1084943e2c54cd00e8/globalmount\"" pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.638851 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": container with ID starting with 8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626 not found: ID does not exist" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638893 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} err="failed to get container status \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": rpc error: code = NotFound desc = could not find container \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": container with ID starting with 8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626 not found: ID does not exist" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638921 4820 scope.go:117] "RemoveContainer" 
containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.639490 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": container with ID starting with e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f not found: ID does not exist" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.639549 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f"} err="failed to get container status \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": rpc error: code = NotFound desc = could not find container \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": container with ID starting with e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f not found: ID does not exist" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.642694 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.649464 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.653302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.677815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.707389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" path="/var/lib/kubelet/pods/3911e64b-266d-4c66-9aec-4e26cec73c06/volumes" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.751739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 08:10:46 crc kubenswrapper[4820]: I0221 08:10:46.295303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:46 crc kubenswrapper[4820]: I0221 08:10:46.594398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerStarted","Data":"8dd551c3890db1e73ddd2531407ed1073b385c0ce262dc89304db8e225ef25b4"} Feb 21 08:10:47 crc kubenswrapper[4820]: I0221 08:10:47.602670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerStarted","Data":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"} Feb 21 08:10:47 crc kubenswrapper[4820]: I0221 08:10:47.617807 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.432816303 podStartE2EDuration="3.617789588s" podCreationTimestamp="2026-02-21 08:10:44 +0000 UTC" firstStartedPulling="2026-02-21 08:10:46.303070716 +0000 UTC m=+5021.336154914" lastFinishedPulling="2026-02-21 08:10:46.488044001 +0000 UTC m=+5021.521128199" observedRunningTime="2026-02-21 08:10:47.615740492 +0000 UTC m=+5022.648824690" watchObservedRunningTime="2026-02-21 
08:10:47.617789588 +0000 UTC m=+5022.650873776" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.744279 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.746281 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748210 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748497 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748911 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-trcmc" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.759005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840217 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942648 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc 
kubenswrapper[4820]: I0221 08:10:52.942782 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943650 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.949326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.951605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.955209 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.966048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.113905 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.565553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:53 crc kubenswrapper[4820]: W0221 08:10:53.572704 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b120b4_ea8d_499d_a8ca_43faa31f000e.slice/crio-775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d WatchSource:0}: Error finding container 775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d: Status 404 returned error can't find the container with id 775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.646486 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d"} Feb 21 08:10:53 crc kubenswrapper[4820]: 
I0221 08:10:53.715382 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.788123 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.788365 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" containerID="cri-o://6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" gracePeriod=10 Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.382522 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488906 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.495673 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2" (OuterVolumeSpecName: "kube-api-access-dvvn2") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "kube-api-access-dvvn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.529960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config" (OuterVolumeSpecName: "config") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.537411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.590960 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.591010 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.591025 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.658138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"9afdb48844b3ad1c0e7a303a434c5fd3ff0eb1584d240bd465041435e7b5bcc5"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660519 4820 generic.go:334] "Generic (PLEG): container finished" podID="8165e702-d96e-4273-8536-7e6e363482d4" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" exitCode=0 Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.661404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" 
event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.661440 4820 scope.go:117] "RemoveContainer" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660744 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.689440 4820 scope.go:117] "RemoveContainer" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.692297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724370 4820 scope.go:117] "RemoveContainer" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: E0221 08:10:54.724837 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": container with ID starting with 6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62 not found: ID does not exist" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724871 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} err="failed to get container status \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": rpc error: code = NotFound desc = could not find container \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": container with ID starting 
with 6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62 not found: ID does not exist" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724891 4820 scope.go:117] "RemoveContainer" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.725031 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:54 crc kubenswrapper[4820]: E0221 08:10:54.725304 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": container with ID starting with e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a not found: ID does not exist" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.725354 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a"} err="failed to get container status \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": rpc error: code = NotFound desc = could not find container \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": container with ID starting with e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a not found: ID does not exist" Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.669113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"9a6065d2d09784afcdd78a1d0210c15622686db5b1c2522b9d329bf49c577286"} Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.669783 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 21 08:10:55 crc 
kubenswrapper[4820]: I0221 08:10:55.693988 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.835659679 podStartE2EDuration="3.693965732s" podCreationTimestamp="2026-02-21 08:10:52 +0000 UTC" firstStartedPulling="2026-02-21 08:10:53.575563805 +0000 UTC m=+5028.608648003" lastFinishedPulling="2026-02-21 08:10:54.433869858 +0000 UTC m=+5029.466954056" observedRunningTime="2026-02-21 08:10:55.689328106 +0000 UTC m=+5030.722412324" watchObservedRunningTime="2026-02-21 08:10:55.693965732 +0000 UTC m=+5030.727049930" Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.708626 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8165e702-d96e-4273-8536-7e6e363482d4" path="/var/lib/kubelet/pods/8165e702-d96e-4273-8536-7e6e363482d4/volumes" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.715473 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:10:57 crc kubenswrapper[4820]: E0221 08:10:57.717051 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc kubenswrapper[4820]: E0221 08:10:57.717320 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="init" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717415 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="init" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717674 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc 
kubenswrapper[4820]: I0221 08:10:57.718342 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.718549 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.741121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.741400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.803446 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.805355 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.808214 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.812510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"]
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.845041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.863796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.965032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.038619 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm"
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.124157 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.554063 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l4whm"]
Feb 21 08:10:58 crc kubenswrapper[4820]: W0221 08:10:58.558042 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d64f747_d529_4e8f_b2ea_11458f16f00c.slice/crio-e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167 WatchSource:0}: Error finding container e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167: Status 404 returned error can't find the container with id e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.619660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"]
Feb 21 08:10:58 crc kubenswrapper[4820]: W0221 08:10:58.621682 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41e7890_6ac4_4d64_aded_2e5934d7ceee.slice/crio-abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557 WatchSource:0}: Error finding container abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557: Status 404 returned error can't find the container with id abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.702675 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerStarted","Data":"abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557"}
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.705428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerStarted","Data":"ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97"}
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.705463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerStarted","Data":"e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167"}
Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.720843 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-l4whm" podStartSLOduration=1.720825999 podStartE2EDuration="1.720825999s" podCreationTimestamp="2026-02-21 08:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:58.718003132 +0000 UTC m=+5033.751087330" watchObservedRunningTime="2026-02-21 08:10:58.720825999 +0000 UTC m=+5033.753910197"
Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.711555 4820
generic.go:334] "Generic (PLEG): container finished" podID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerID="afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb" exitCode=0
Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.711623 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerDied","Data":"afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb"}
Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.713591 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerID="ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97" exitCode=0
Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.713701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerDied","Data":"ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97"}
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.140794 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.146768 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm"
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.200984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"8d64f747-d529-4e8f-b2ea-11458f16f00c\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") "
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") "
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"8d64f747-d529-4e8f-b2ea-11458f16f00c\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") "
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") "
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.202074 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e41e7890-6ac4-4d64-aded-2e5934d7ceee" (UID: "e41e7890-6ac4-4d64-aded-2e5934d7ceee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.202124 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d64f747-d529-4e8f-b2ea-11458f16f00c" (UID: "8d64f747-d529-4e8f-b2ea-11458f16f00c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.210487 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn" (OuterVolumeSpecName: "kube-api-access-994zn") pod "8d64f747-d529-4e8f-b2ea-11458f16f00c" (UID: "8d64f747-d529-4e8f-b2ea-11458f16f00c"). InnerVolumeSpecName "kube-api-access-994zn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.210527 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65" (OuterVolumeSpecName: "kube-api-access-qjt65") pod "e41e7890-6ac4-4d64-aded-2e5934d7ceee" (UID: "e41e7890-6ac4-4d64-aded-2e5934d7ceee"). InnerVolumeSpecName "kube-api-access-qjt65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303279 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303315 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303324 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303335 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.743379 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm"
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.743601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerDied","Data":"e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167"}
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.744012 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167"
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746461 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x"
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746432 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerDied","Data":"abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557"}
Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746663 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427252 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-spcxr"]
Feb 21 08:11:03 crc kubenswrapper[4820]: E0221 08:11:03.427657 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427675 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create"
Feb 21 08:11:03 crc kubenswrapper[4820]: E0221 08:11:03.427692 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427699 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427877 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427903 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.428563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.434124 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-spcxr"]
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435104 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435113 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.441137 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546525 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.647981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.648034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.648073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.656693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.666112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.673176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.746110 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:04 crc kubenswrapper[4820]: I0221 08:11:04.295459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-spcxr"]
Feb 21 08:11:04 crc kubenswrapper[4820]: I0221 08:11:04.771416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerStarted","Data":"e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1"}
Feb 21 08:11:12 crc kubenswrapper[4820]: I0221 08:11:12.832564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerStarted","Data":"07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373"}
Feb 21 08:11:12 crc kubenswrapper[4820]: I0221 08:11:12.857275 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-spcxr" podStartSLOduration=2.379123751 podStartE2EDuration="9.857233534s" podCreationTimestamp="2026-02-21 08:11:03 +0000 UTC" firstStartedPulling="2026-02-21 08:11:04.303904419 +0000 UTC m=+5039.336988627" lastFinishedPulling="2026-02-21 08:11:11.782014212 +0000 UTC m=+5046.815098410"
observedRunningTime="2026-02-21 08:11:12.85339149 +0000 UTC m=+5047.886475698" watchObservedRunningTime="2026-02-21 08:11:12.857233534 +0000 UTC m=+5047.890317742"
Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.177634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.816531 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.816621 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.843650 4820 generic.go:334] "Generic (PLEG): container finished" podID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerID="07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373" exitCode=0
Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.843718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerDied","Data":"07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373"}
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.180733 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") "
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245661 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") "
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") "
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.252813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw" (OuterVolumeSpecName: "kube-api-access-dz4dw") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "kube-api-access-dz4dw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.273448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.296462 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data" (OuterVolumeSpecName: "config-data") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.346944 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.346987 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.347002 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") on node \"crc\" DevicePath \"\""
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerDied","Data":"e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1"}
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858967 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1"
Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858702 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084001 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"]
Feb 21 08:11:16 crc kubenswrapper[4820]: E0221 08:11:16.084453 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084477 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084698 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.085821 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.122428 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"]
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc
kubenswrapper[4820]: I0221 08:11:16.158948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.159095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.189660 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lqg8w"]
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.190704 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.193296 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.204675 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.204933 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.205087 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.205260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.265315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"]
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName:
\"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.267760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.268377 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.269032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.287600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.345337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371617 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.376227 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w"
Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.376524
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385500 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.401410 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.402719 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.550138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.871660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.022634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:17 crc kubenswrapper[4820]: W0221 08:11:17.026707 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb765426_53ee_4c41_a313_5ddf7591b6a9.slice/crio-760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4 WatchSource:0}: Error finding container 760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4: Status 404 returned error can't find the container with id 760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4 Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.887187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerStarted","Data":"1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.887560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" 
event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerStarted","Data":"760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890432 4820 generic.go:334] "Generic (PLEG): container finished" podID="805ecde9-528b-45f4-a438-42c7799bab7b" containerID="057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c" exitCode=0 Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890499 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerStarted","Data":"db76e306f3b16314f5537d5d9c142291f2db91510eba8dc788421390eb44ddd6"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.906484 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lqg8w" podStartSLOduration=1.9064668299999998 podStartE2EDuration="1.90646683s" podCreationTimestamp="2026-02-21 08:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:17.905624798 +0000 UTC m=+5052.938708996" watchObservedRunningTime="2026-02-21 08:11:17.90646683 +0000 UTC m=+5052.939551028" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.245571 4820 scope.go:117] "RemoveContainer" containerID="eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.900626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" 
event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerStarted","Data":"8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983"} Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.900698 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.926640 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6776586657-khcd6" podStartSLOduration=2.926623302 podStartE2EDuration="2.926623302s" podCreationTimestamp="2026-02-21 08:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:18.923083596 +0000 UTC m=+5053.956167794" watchObservedRunningTime="2026-02-21 08:11:18.926623302 +0000 UTC m=+5053.959707500" Feb 21 08:11:21 crc kubenswrapper[4820]: I0221 08:11:21.937269 4820 generic.go:334] "Generic (PLEG): container finished" podID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerID="1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557" exitCode=0 Feb 21 08:11:21 crc kubenswrapper[4820]: I0221 08:11:21.937350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerDied","Data":"1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557"} Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.286012 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386576 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.391735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.392037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts" (OuterVolumeSpecName: "scripts") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.392197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr" (OuterVolumeSpecName: "kube-api-access-pkgjr") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "kube-api-access-pkgjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.393516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.410655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.411194 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data" (OuterVolumeSpecName: "config-data") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488526 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488574 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488586 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488595 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488604 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488613 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerDied","Data":"760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4"} Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956375 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956403 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.025898 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.031739 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.124460 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: E0221 08:11:24.125165 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.125182 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.125437 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.126151 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.128732 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129097 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129086 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.132505 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205730 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308823 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.313302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: 
\"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.328092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.328690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.329749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.332676 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.345390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 
08:11:24.448898 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.899800 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.963705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerStarted","Data":"d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e"} Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.705818 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" path="/var/lib/kubelet/pods/db765426-53ee-4c41-a313-5ddf7591b6a9/volumes" Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.971273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerStarted","Data":"08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d"} Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.991402 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cf89p" podStartSLOduration=1.991381971 podStartE2EDuration="1.991381971s" podCreationTimestamp="2026-02-21 08:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:25.988924265 +0000 UTC m=+5061.022008473" watchObservedRunningTime="2026-02-21 08:11:25.991381971 +0000 UTC m=+5061.024466169" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.403454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.478336 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.478591 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c565c565-4g68w" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" containerID="cri-o://af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" gracePeriod=10 Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.962780 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981700 4820 generic.go:334] "Generic (PLEG): container finished" podID="623dbf87-d39f-4026-9aa5-72d52508407b" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" exitCode=0 Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981793 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"f79446424404530462274952dece2308d0d1ba04fb18b87302d89305eb07556f"} Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981838 4820 scope.go:117] "RemoveContainer" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.012453 4820 scope.go:117] "RemoveContainer" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-296sq\" (UniqueName: 
\"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.067022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.067587 4820 scope.go:117] "RemoveContainer" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: E0221 08:11:27.068372 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": container with ID starting with af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156 not found: ID does not exist" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.068574 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} err="failed to get container status \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": rpc error: code = NotFound desc = could not find container \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": container with ID starting with af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156 not found: ID does not exist" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.068606 4820 scope.go:117] "RemoveContainer" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: E0221 
08:11:27.069212 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": container with ID starting with f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6 not found: ID does not exist" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.069308 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6"} err="failed to get container status \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": rpc error: code = NotFound desc = could not find container \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": container with ID starting with f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6 not found: ID does not exist" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.084722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq" (OuterVolumeSpecName: "kube-api-access-296sq") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "kube-api-access-296sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.124905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.130769 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config" (OuterVolumeSpecName: "config") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.137810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.148496 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169136 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169170 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169181 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169548 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.314188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.326718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.706824 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" path="/var/lib/kubelet/pods/623dbf87-d39f-4026-9aa5-72d52508407b/volumes" Feb 21 08:11:27 crc kubenswrapper[4820]: 
I0221 08:11:27.992615 4820 generic.go:334] "Generic (PLEG): container finished" podID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerID="08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d" exitCode=0 Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.992658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerDied","Data":"08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d"} Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.300713 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407846 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.408030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429390 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts" (OuterVolumeSpecName: "scripts") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b" (OuterVolumeSpecName: "kube-api-access-tfz8b") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "kube-api-access-tfz8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429664 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.438601 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data" (OuterVolumeSpecName: "config-data") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.440092 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510052 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510107 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510126 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510140 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510153 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510164 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.012826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerDied","Data":"d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e"} Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 
08:11:30.012879 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.012881 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.090803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091129 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091173 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="init" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091183 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="init" Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091206 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091369 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091380 4820 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091884 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.093736 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.094189 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.095848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.095904 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.096036 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.096525 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.111048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.223886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224105 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwr9\" (UniqueName: \"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwr9\" (UniqueName: 
\"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331486 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: 
\"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331502 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332504 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.339026 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.354947 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwr9\" (UniqueName: \"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.417119 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.901926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:31 crc kubenswrapper[4820]: I0221 08:11:31.021851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcdf4b996-mcbdr" event={"ID":"1f763cab-817e-415e-bb73-4e077fa0c745","Type":"ContainerStarted","Data":"67606be793907b9f7fd5d9fa4cc8d7b8d471061d111dfd0bc0b44463c61f875a"} Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.031258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcdf4b996-mcbdr" event={"ID":"1f763cab-817e-415e-bb73-4e077fa0c745","Type":"ContainerStarted","Data":"99ea8ebf293e30a49f96437393013670d4715846d581ac21a58c00b4b9225020"} Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.031574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.054209 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-fcdf4b996-mcbdr" podStartSLOduration=2.054187972 podStartE2EDuration="2.054187972s" podCreationTimestamp="2026-02-21 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:32.047559602 +0000 UTC m=+5067.080643800" watchObservedRunningTime="2026-02-21 08:11:32.054187972 +0000 UTC m=+5067.087272170" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.816595 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817097 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817137 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817818 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817875 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" gracePeriod=600 Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.135841 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" exitCode=0 Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.135909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.136442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.136469 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.007613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.694674 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.695820 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703632 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b64vn" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703864 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.738335 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: E0221 08:12:02.738926 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-hgbmb openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-hgbmb openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="7aaf850e-4879-4971-aff1-b9e669395079" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.745399 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.817835 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.819176 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.831599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.962075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.063701 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064274 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.065560 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.069862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.074086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.084915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.143292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.298360 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.305274 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7aaf850e-4879-4971-aff1-b9e669395079" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.311514 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.557904 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.567763 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.706960 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aaf850e-4879-4971-aff1-b9e669395079" path="/var/lib/kubelet/pods/7aaf850e-4879-4971-aff1-b9e669395079/volumes" Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.307751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0690f7f6-8a8e-4c10-92b5-31640a2a46b1","Type":"ContainerStarted","Data":"018cc0eb075c25a159f0d4f6d7b7d2a1a4f2eb823a973e7c639603f562270ccf"} Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.307769 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.315603 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7aaf850e-4879-4971-aff1-b9e669395079" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" Feb 21 08:12:15 crc kubenswrapper[4820]: I0221 08:12:15.390972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0690f7f6-8a8e-4c10-92b5-31640a2a46b1","Type":"ContainerStarted","Data":"d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469"} Feb 21 08:12:15 crc kubenswrapper[4820]: I0221 08:12:15.412697 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.228803462 podStartE2EDuration="13.412677521s" podCreationTimestamp="2026-02-21 08:12:02 +0000 UTC" firstStartedPulling="2026-02-21 08:12:03.567490319 +0000 UTC m=+5098.600574517" lastFinishedPulling="2026-02-21 08:12:14.751364358 +0000 UTC m=+5109.784448576" observedRunningTime="2026-02-21 08:12:15.40558937 +0000 UTC m=+5110.438673558" watchObservedRunningTime="2026-02-21 08:12:15.412677521 +0000 UTC m=+5110.445761719" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.972467 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.973949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.976000 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.978363 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.979415 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.988028 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.998467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksws5\" (UniqueName: 
\"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142321 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142527 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.143169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.143812 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.162674 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.163308 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.306962 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.313335 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.736922 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.799653 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:42 crc kubenswrapper[4820]: W0221 08:13:42.807905 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a61ba7_b697_4b33_8ed3_9dda50a2c415.slice/crio-cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559 WatchSource:0}: Error finding container cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559: Status 404 returned error can't find the container with id cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559 Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.054879 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerStarted","Data":"135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.055497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerStarted","Data":"cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058516 4820 generic.go:334] "Generic (PLEG): container finished" podID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" 
containerID="501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7" exitCode=0 Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerDied","Data":"501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerStarted","Data":"58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.069892 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5a31-account-create-update-p74qt" podStartSLOduration=2.06986628 podStartE2EDuration="2.06986628s" podCreationTimestamp="2026-02-21 08:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:43.066694963 +0000 UTC m=+5198.099779161" watchObservedRunningTime="2026-02-21 08:13:43.06986628 +0000 UTC m=+5198.102950478" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.066626 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerID="135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4" exitCode=0 Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.066686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerDied","Data":"135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4"} Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.410606 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.582886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583040 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fea2a27-a57a-4827-8e17-5d19ef7bba28" (UID: "0fea2a27-a57a-4827-8e17-5d19ef7bba28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583735 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.588310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4" (OuterVolumeSpecName: "kube-api-access-qw2c4") pod "0fea2a27-a57a-4827-8e17-5d19ef7bba28" (UID: "0fea2a27-a57a-4827-8e17-5d19ef7bba28"). InnerVolumeSpecName "kube-api-access-qw2c4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.685159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076142 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076146 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerDied","Data":"58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794"} Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076578 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.380035 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.496584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.496837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.497136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4a61ba7-b697-4b33-8ed3-9dda50a2c415" (UID: "d4a61ba7-b697-4b33-8ed3-9dda50a2c415"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.497863 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.500811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5" (OuterVolumeSpecName: "kube-api-access-ksws5") pod "d4a61ba7-b697-4b33-8ed3-9dda50a2c415" (UID: "d4a61ba7-b697-4b33-8ed3-9dda50a2c415"). InnerVolumeSpecName "kube-api-access-ksws5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.599223 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.083637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerDied","Data":"cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559"} Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.083690 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559" Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.084342 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.418066 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kncz4"] Feb 21 08:13:47 crc kubenswrapper[4820]: E0221 08:13:47.419592 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.419717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create" Feb 21 08:13:47 crc kubenswrapper[4820]: E0221 08:13:47.419797 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.419865 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420096 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420196 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.424249 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.424383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9x2ph" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.433668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kncz4"] Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " 
pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.549137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc 
kubenswrapper[4820]: I0221 08:13:47.553729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.565052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.748313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:48 crc kubenswrapper[4820]: I0221 08:13:48.200093 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kncz4"] Feb 21 08:13:48 crc kubenswrapper[4820]: W0221 08:13:48.205639 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2285cbc5_545d_463d_ae4a_350c3fd26323.slice/crio-5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011 WatchSource:0}: Error finding container 5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011: Status 404 returned error can't find the container with id 5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011 Feb 21 08:13:49 crc kubenswrapper[4820]: I0221 08:13:49.109464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerStarted","Data":"5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011"} Feb 21 08:13:53 crc kubenswrapper[4820]: I0221 08:13:53.146425 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerStarted","Data":"2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5"} Feb 21 08:13:53 crc kubenswrapper[4820]: I0221 08:13:53.168941 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kncz4" podStartSLOduration=1.779823202 podStartE2EDuration="6.168920667s" podCreationTimestamp="2026-02-21 08:13:47 +0000 UTC" firstStartedPulling="2026-02-21 08:13:48.208675379 +0000 UTC m=+5203.241759577" lastFinishedPulling="2026-02-21 08:13:52.597772844 +0000 UTC m=+5207.630857042" observedRunningTime="2026-02-21 08:13:53.166219285 +0000 UTC m=+5208.199303503" watchObservedRunningTime="2026-02-21 08:13:53.168920667 +0000 UTC m=+5208.202004865" Feb 21 08:13:54 crc kubenswrapper[4820]: I0221 08:13:54.155432 4820 generic.go:334] "Generic (PLEG): container finished" podID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerID="2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5" exitCode=0 Feb 21 08:13:54 crc kubenswrapper[4820]: I0221 08:13:54.155467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerDied","Data":"2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5"} Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.479747 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.603745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.603865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.606872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.611132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t" (OuterVolumeSpecName: "kube-api-access-xdx4t") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). InnerVolumeSpecName "kube-api-access-xdx4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.611656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.627343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709374 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709387 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerDied","Data":"5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011"} Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173364 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173426 4820 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.390959 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"] Feb 21 08:13:56 crc kubenswrapper[4820]: E0221 08:13:56.391393 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.391411 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.391584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.394406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.401999 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.403608 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.403911 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9x2ph" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421380 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlppc\" (UniqueName: 
\"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421767 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.423013 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.429218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.436612 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.454397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.491152 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.492461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.509812 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: 
I0221 08:13:56.523110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlppc\" (UniqueName: \"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " 
pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: 
\"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.526572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.527298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod 
\"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.528070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.531114 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.544621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlppc\" (UniqueName: \"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.586610 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.601256 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.606400 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.607943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " 
pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: 
\"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod 
\"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " 
pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.629467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.632285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.634623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.646096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.649265 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: 
\"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.649381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.718859 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-769cf6fd65-dfls2" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729622 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.730179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.734365 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.735435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.739548 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.746195 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.752767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.812835 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.916691 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.238966 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"] Feb 21 08:13:57 crc kubenswrapper[4820]: W0221 08:13:57.241850 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f442bc_072b_483e_8821_3ee262e5aa4e.slice/crio-362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5 WatchSource:0}: Error finding container 362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5: Status 404 returned error can't find the container with id 362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5 Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.320496 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"] Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.413951 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.501100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.192434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"ee3d488b58002a926a798c1f2416707be14f762c34d6d00a88c3289b17cd8b50"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.195195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196821 
4820 generic.go:334] "Generic (PLEG): container finished" podID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" exitCode=0 Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196902 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerStarted","Data":"e4f69cac7c8ea8139b87e81abdba2e547b7e3f99598e2c84fa49315dcdd98eeb"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 
08:13:58.248115 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7686494894-42qqd" podStartSLOduration=2.2480966430000002 podStartE2EDuration="2.248096643s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:58.24096744 +0000 UTC m=+5213.274051638" watchObservedRunningTime="2026-02-21 08:13:58.248096643 +0000 UTC m=+5213.281180841" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.691586 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.693354 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.695633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.696971 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.701546 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.766581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.766982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767551 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877676 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877842 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.878168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.882154 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.882930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.884932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.886318 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.886688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.894484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.016045 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.212925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"598c72d84b9155d180086f27a39db53f86b07307551e8ddb993a05f723d49f9b"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.214442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerStarted","Data":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.214727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.219756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"5c2f5a88f4d426683efadcfb835fa8b78a87c229c33ee36cbecc458d1672c7ed"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.219821 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.233055 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" podStartSLOduration=3.233033312 podStartE2EDuration="3.233033312s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:59.231974914 +0000 UTC m=+5214.265059112" watchObservedRunningTime="2026-02-21 08:13:59.233033312 +0000 UTC m=+5214.266117520" Feb 21 08:13:59 crc kubenswrapper[4820]: 
I0221 08:13:59.506401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:59 crc kubenswrapper[4820]: W0221 08:13:59.511449 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d7d55d_2b0b_40fe_9b1c_5930358bebe8.slice/crio-eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd WatchSource:0}: Error finding container eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd: Status 404 returned error can't find the container with id eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.229688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"ff78b6c637a07d36357c56a079878ffbfcd326aced08254ac34aaf5ac3ab147b"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.229984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"16d3b75b707e1e3c9c054954ee90b4f680a6ea059f9363a937a01d7323a65c8c"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230067 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230091 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:00 crc 
kubenswrapper[4820]: I0221 08:14:00.232378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"cbc82c1d949b4157d44143d8c5e4e63b85f508822330886cace3ee128a860431"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.234375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"e0f7a87ef776a8b4a6b4f14148f27d1ad9348bb917c7801dacde9d3c2da2572b"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.254474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cf69c945b-fsc4w" podStartSLOduration=2.254452529 podStartE2EDuration="2.254452529s" podCreationTimestamp="2026-02-21 08:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:00.24822155 +0000 UTC m=+5215.281305748" watchObservedRunningTime="2026-02-21 08:14:00.254452529 +0000 UTC m=+5215.287536727" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.271457 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" podStartSLOduration=2.7445653759999997 podStartE2EDuration="4.271444648s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="2026-02-21 08:13:57.32361419 +0000 UTC m=+5212.356698388" lastFinishedPulling="2026-02-21 08:13:58.850493462 +0000 UTC m=+5213.883577660" observedRunningTime="2026-02-21 08:14:00.268483748 +0000 UTC m=+5215.301567946" watchObservedRunningTime="2026-02-21 08:14:00.271444648 +0000 UTC m=+5215.304528846" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.290694 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-769cf6fd65-dfls2" podStartSLOduration=2.702040066 podStartE2EDuration="4.290670649s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="2026-02-21 08:13:57.244341735 +0000 UTC m=+5212.277425933" lastFinishedPulling="2026-02-21 08:13:58.832972318 +0000 UTC m=+5213.866056516" observedRunningTime="2026-02-21 08:14:00.281401967 +0000 UTC m=+5215.314486175" watchObservedRunningTime="2026-02-21 08:14:00.290670649 +0000 UTC m=+5215.323754847" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.517793 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.534741 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.620804 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.621140 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" containerID="cri-o://8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d" gracePeriod=30 Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.621321 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" containerID="cri-o://bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472" gracePeriod=30 Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.631424 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" 
probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": EOF" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.631506 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": EOF" Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.285951 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerID="8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d" exitCode=143 Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.286022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d"} Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.815009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.903059 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.903364 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6776586657-khcd6" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns" containerID="cri-o://8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983" gracePeriod=10 Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.318521 4820 generic.go:334] "Generic (PLEG): container finished" podID="805ecde9-528b-45f4-a438-42c7799bab7b" containerID="8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983" exitCode=0 Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.318815 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983"} Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.431367 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521558 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521643 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521785 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.526394 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k" (OuterVolumeSpecName: "kube-api-access-hpt7k") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "kube-api-access-hpt7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.563763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.566390 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config" (OuterVolumeSpecName: "config") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.576730 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.592740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623222 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623267 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623277 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623289 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623297 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.337710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"db76e306f3b16314f5537d5d9c142291f2db91510eba8dc788421390eb44ddd6"}
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.337777 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6"
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.338070 4820 scope.go:117] "RemoveContainer" containerID="8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983"
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.364879 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"]
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.369727 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"]
Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.379375 4820 scope.go:117] "RemoveContainer" containerID="057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c"
Feb 21 08:14:09 crc kubenswrapper[4820]: I0221 08:14:09.706576 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" path="/var/lib/kubelet/pods/805ecde9-528b-45f4-a438-42c7799bab7b/volumes"
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.023273 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": read tcp 10.217.0.2:34896->10.217.1.34:9311: read: connection reset by peer"
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.023375 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": read tcp 10.217.0.2:34908->10.217.1.34:9311: read: connection reset by peer"
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.378811 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerID="bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472" exitCode=0
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.378869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472"}
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.379171 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838"}
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.379186 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838"
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.389468 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") "
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") "
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") "
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") "
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") "
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.488252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs" (OuterVolumeSpecName: "logs") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.492448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69" (OuterVolumeSpecName: "kube-api-access-4sp69") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "kube-api-access-4sp69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.492639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.510336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.535538 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data" (OuterVolumeSpecName: "config-data") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588900 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588938 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588951 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588966 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588977 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.386350 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.413288 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"]
Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.420088 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"]
Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.705943 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" path="/var/lib/kubelet/pods/ca3b906b-fd95-4d1f-a82f-18663d7cb683/volumes"
Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.816148 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.816218 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:14:43 crc kubenswrapper[4820]: I0221 08:14:43.816914 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:14:43 crc kubenswrapper[4820]: I0221 08:14:43.817508 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.025943 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jzbnq"]
Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.026953 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.026977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api"
Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027007 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log"
Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027027 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="init"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027034 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="init"
Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027055 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027061 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027492 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027503 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.028659 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.037363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzbnq"]
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.108501 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"]
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.109565 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.116504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.121178 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"]
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.155015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.155089 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.257065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.259307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.275042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.351645 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.358379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.358458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.359168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.376839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.438394 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.778182 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzbnq"]
Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.911248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"]
Feb 21 08:14:45 crc kubenswrapper[4820]: W0221 08:14:45.919743 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80901dca_016d_4c52_b87d_f953b0689f1a.slice/crio-d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652 WatchSource:0}: Error finding container d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652: Status 404 returned error can't find the container with id d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.638902 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerID="2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430" exitCode=0
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.638974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerDied","Data":"2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430"}
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.639010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerStarted","Data":"58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"}
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640218 4820 generic.go:334] "Generic (PLEG): container finished" podID="80901dca-016d-4c52-b87d-f953b0689f1a" containerID="e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af" exitCode=0
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerDied","Data":"e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af"}
Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerStarted","Data":"d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"}
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.015982 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.023098 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.107840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") "
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"80901dca-016d-4c52-b87d-f953b0689f1a\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") "
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108150 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") "
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108165 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"80901dca-016d-4c52-b87d-f953b0689f1a\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") "
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108779 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80901dca-016d-4c52-b87d-f953b0689f1a" (UID: "80901dca-016d-4c52-b87d-f953b0689f1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108821 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" (UID: "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.109095 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.109115 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.117459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b" (OuterVolumeSpecName: "kube-api-access-wh46b") pod "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" (UID: "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22"). InnerVolumeSpecName "kube-api-access-wh46b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.117545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr" (OuterVolumeSpecName: "kube-api-access-7csdr") pod "80901dca-016d-4c52-b87d-f953b0689f1a" (UID: "80901dca-016d-4c52-b87d-f953b0689f1a"). InnerVolumeSpecName "kube-api-access-7csdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.211009 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.211036 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerDied","Data":"58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"}
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656605 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656410 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerDied","Data":"d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"}
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657821 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657911 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.309901 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:50 crc kubenswrapper[4820]: E0221 08:14:50.310371 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: E0221 08:14:50.310405 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310413 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310617 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310653 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.311329 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314393 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314444 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zbxkp"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314468 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.320694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.567294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.568687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.579088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.632791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.090181 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.691994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerStarted","Data":"150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1"}
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.692319 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerStarted","Data":"b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf"}
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.712119 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6768b" podStartSLOduration=1.712097805 podStartE2EDuration="1.712097805s" podCreationTimestamp="2026-02-21 08:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:51.707837429 +0000 UTC m=+5266.740921627" watchObservedRunningTime="2026-02-21 08:14:51.712097805 +0000 UTC m=+5266.745181993"
Feb 21 08:14:55 crc kubenswrapper[4820]: I0221 08:14:55.723423 4820 generic.go:334] "Generic (PLEG): container finished" podID="46c29c61-83db-423e-8e56-52c1637985e2" containerID="150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1" exitCode=0
Feb 21 08:14:55 crc kubenswrapper[4820]: I0221 08:14:55.723512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerDied","Data":"150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1"}
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.032334 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.181095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.182130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.182484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.187695 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76" (OuterVolumeSpecName: "kube-api-access-6jj76") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "kube-api-access-6jj76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.203706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.211811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config" (OuterVolumeSpecName: "config") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286676 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286768 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b"
event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerDied","Data":"b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf"} Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744354 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744353 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.861537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:14:57 crc kubenswrapper[4820]: E0221 08:14:57.863744 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.863893 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.864204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.865543 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.884648 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.965955 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.967712 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974266 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zbxkp" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974381 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974481 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002933 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002979 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.003020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.104948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" 
(UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105441 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: 
\"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105525 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.107573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.107883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.108376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " 
pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.108728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.128211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.199975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.216281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.216373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.218212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: 
\"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.220718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.230468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.296683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.730155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.753468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerStarted","Data":"aaa04799154f4e72c5b03417ed41306779ddc85795c0bc38fbe0c0a1449205db"} Feb 21 08:14:58 crc kubenswrapper[4820]: W0221 08:14:58.991934 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4cb47d9_b40c_4c9e_bf0c_848e587adc1d.slice/crio-5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe WatchSource:0}: Error finding container 5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe: Status 404 returned error can't find the container with id 
5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.997248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.766095 4820 generic.go:334] "Generic (PLEG): container finished" podID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" exitCode=0 Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.766294 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3"} Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769180 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9"} Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4"} Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe"} Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769742 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.812898 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6586587ddd-vncjg" podStartSLOduration=2.812879075 podStartE2EDuration="2.812879075s" podCreationTimestamp="2026-02-21 08:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:59.804813417 +0000 UTC m=+5274.837897615" watchObservedRunningTime="2026-02-21 08:14:59.812879075 +0000 UTC m=+5274.845963273" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.149020 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.150828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.153943 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.155218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168855 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.169902 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") 
pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.271720 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.279450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.288087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.468437 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"] Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.469840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.473262 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.474032 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.492852 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"] Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.522059 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: 
I0221 08:15:00.576499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.678821 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc 
kubenswrapper[4820]: I0221 08:15:00.679172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679351 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.685690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.686980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.687909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.688446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.688511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod 
\"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.689567 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.703025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.789552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerStarted","Data":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.789914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.790069 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.818697 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" podStartSLOduration=3.818678009 podStartE2EDuration="3.818678009s" podCreationTimestamp="2026-02-21 08:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:00.808640228 +0000 UTC m=+5275.841724426" watchObservedRunningTime="2026-02-21 08:15:00.818678009 +0000 UTC m=+5275.851762207" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.956835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.077772 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.086856 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.157390 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.705842 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" path="/var/lib/kubelet/pods/5a608f92-6849-4847-9b75-495f1d27b9cf/volumes" Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799732 4820 generic.go:334] "Generic (PLEG): container finished" podID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerID="b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2" exitCode=0 Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerDied","Data":"b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799880 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerStarted","Data":"e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"92dbeea807896385169f86b9d4a8842bfc3b4cdcd6cb220b3168f76a6416a2d4"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803356 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803373 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"c7267b27d2c5b641794a7c01b72e3f124cbc7e8b026b484f6a960d6994434506"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"528df5a9372a251b611fb2e5966cabaff9cb4b5c18fae221814dfa7be6ca5c5a"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.847216 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67f7f95649-vvsjb" podStartSLOduration=1.847191997 podStartE2EDuration="1.847191997s" podCreationTimestamp="2026-02-21 08:15:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:01.838350978 +0000 UTC m=+5276.871435176" watchObservedRunningTime="2026-02-21 08:15:01.847191997 +0000 UTC m=+5276.880276205" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.167514 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220403 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.221826 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.227418 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.227506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp" (OuterVolumeSpecName: "kube-api-access-x7tjp") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "kube-api-access-x7tjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322259 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322291 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322300 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" 
event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerDied","Data":"e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe"} Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817457 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817464 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe" Feb 21 08:15:04 crc kubenswrapper[4820]: I0221 08:15:04.232146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 08:15:04 crc kubenswrapper[4820]: I0221 08:15:04.240253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 08:15:05 crc kubenswrapper[4820]: I0221 08:15:05.708642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9686bf95-baf7-4066-8769-66f168be0215" path="/var/lib/kubelet/pods/9686bf95-baf7-4066-8769-66f168be0215/volumes" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.201423 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.287339 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.287647 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns" containerID="cri-o://99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" gracePeriod=10 Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.755963 4820 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856481 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" exitCode=0 Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856562 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"e4f69cac7c8ea8139b87e81abdba2e547b7e3f99598e2c84fa49315dcdd98eeb"} Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856581 4820 scope.go:117] "RemoveContainer" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856735 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.878781 4820 scope.go:117] "RemoveContainer" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897421 4820 scope.go:117] "RemoveContainer" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: E0221 08:15:08.897857 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": container with ID starting with 99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877 not found: ID does not exist" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897904 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} err="failed to get container status \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": rpc error: code = NotFound desc = could not find container \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": container with ID starting with 99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877 not found: ID does not exist" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897935 4820 scope.go:117] "RemoveContainer" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: E0221 08:15:08.898322 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": container with ID starting with 
fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260 not found: ID does not exist" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.898345 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260"} err="failed to get container status \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": rpc error: code = NotFound desc = could not find container \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": container with ID starting with fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260 not found: ID does not exist" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910322 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.915630 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z" (OuterVolumeSpecName: "kube-api-access-7h68z") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "kube-api-access-7h68z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.961182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.963514 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.967558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.975438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config" (OuterVolumeSpecName: "config") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013086 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013123 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013135 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013147 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 
08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013157 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.187291 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.196451 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.706093 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" path="/var/lib/kubelet/pods/f5b4d95c-af87-417e-a56b-20cb7a43c2e7/volumes" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.815943 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816004 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816540 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816648 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" gracePeriod=600 Feb 21 08:15:13 crc kubenswrapper[4820]: E0221 08:15:13.939740 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914314 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" exitCode=0 Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914393 4820 scope.go:117] "RemoveContainer" containerID="8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.915373 4820 
scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:14 crc kubenswrapper[4820]: E0221 08:15:14.915950 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:18 crc kubenswrapper[4820]: I0221 08:15:18.453165 4820 scope.go:117] "RemoveContainer" containerID="c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f" Feb 21 08:15:18 crc kubenswrapper[4820]: I0221 08:15:18.474750 4820 scope.go:117] "RemoveContainer" containerID="3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce" Feb 21 08:15:27 crc kubenswrapper[4820]: I0221 08:15:27.696978 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:27 crc kubenswrapper[4820]: E0221 08:15:27.697691 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:28 crc kubenswrapper[4820]: I0221 08:15:28.308115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.802530 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 
08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.858874 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.859198 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6586587ddd-vncjg" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api" containerID="cri-o://287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4" gracePeriod=30
Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.859708 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6586587ddd-vncjg" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd" containerID="cri-o://11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9" gracePeriod=30
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224374 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"]
Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224696 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224713 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles"
Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224731 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="init"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224737 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="init"
Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224753 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224760 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224928 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.226082 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.255912 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"]
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256801 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.360297 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361787 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.383538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.543816 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.999401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"]
Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.060417 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerID="11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9" exitCode=0
Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.060444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9"}
Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.061579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"8f92b520fb9e9df10aafe11fead045065f6b3ec213cdd43575b3f0ea407190ec"}
Feb 21 08:15:33 crc kubenswrapper[4820]: I0221 08:15:33.071429 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9" exitCode=0
Feb 21 08:15:33 crc kubenswrapper[4820]: I0221 08:15:33.071717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"}
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.678555 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerID="287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4" exitCode=0
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.679146 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4"}
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.909516 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.971904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") "
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.971954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") "
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.972488 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") "
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.973009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") "
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.973169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") "
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.978311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc" (OuterVolumeSpecName: "kube-api-access-sbdqc") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "kube-api-access-sbdqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.978449 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.021784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config" (OuterVolumeSpecName: "config") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.030347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.041596 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076467 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076520 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076534 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076546 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076607 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe"}
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688187 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688478 4820 scope.go:117] "RemoveContainer" containerID="11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9"
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.691283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"}
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.708711 4820 scope.go:117] "RemoveContainer" containerID="287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4"
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.732214 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.739586 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.703731 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b" exitCode=0
Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.709542 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" path="/var/lib/kubelet/pods/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d/volumes"
Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.710123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"}
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.216696 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"]
Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.217528 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd"
Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.217548 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217555 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217713 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217725 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.218228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.223792 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.223999 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.224103 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.224120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.225407 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kqpjg"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227356 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227830 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.291158 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.301318 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8nnn7"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.302461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.310866 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"]
Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.319763 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wlqms ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-7rrmn" podUID="d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.331771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332947 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.336549 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.336773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.337566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.344297 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8nnn7"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.350182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.353228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.361630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.376353 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.377885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.379487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.407882 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436388 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437197 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.442164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.443575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.444989 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.458304 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName:
\"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.545191 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.546061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.549098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.549617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.583051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.620634 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.742084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.742519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.743478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"} Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.809954 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kmzgn" podStartSLOduration=3.211232882 podStartE2EDuration="8.809928925s" podCreationTimestamp="2026-02-21 08:15:31 +0000 UTC" firstStartedPulling="2026-02-21 08:15:33.074403273 +0000 UTC m=+5308.107487471" lastFinishedPulling="2026-02-21 08:15:38.673099316 +0000 UTC m=+5313.706183514" observedRunningTime="2026-02-21 08:15:39.794743114 +0000 UTC m=+5314.827827432" watchObservedRunningTime="2026-02-21 08:15:39.809928925 +0000 UTC m=+5314.843013123" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.873015 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050125 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050479 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050563 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts" (OuterVolumeSpecName: "scripts") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.051111 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.051311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.057158 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.060174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.060285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.061397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms" (OuterVolumeSpecName: "kube-api-access-wlqms") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "kube-api-access-wlqms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153570 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153606 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153621 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153631 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153643 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153653 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153664 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.182925 4820 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8nnn7"] Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.303386 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"] Feb 21 08:15:40 crc kubenswrapper[4820]: W0221 08:15:40.305958 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd72eaa53_54ec_46af_91c3_fcf248385b34.slice/crio-2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff WatchSource:0}: Error finding container 2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff: Status 404 returned error can't find the container with id 2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.763282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerStarted","Data":"4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192"} Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767062 4820 generic.go:334] "Generic (PLEG): container finished" podID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerID="093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896" exitCode=0 Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896"} Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerStarted","Data":"2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff"} Feb 
21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767775 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.002014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"] Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.011482 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"] Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.551372 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.551630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.709480 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" path="/var/lib/kubelet/pods/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b/volumes" Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.780003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerStarted","Data":"b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0"} Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.780053 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.805768 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" podStartSLOduration=2.805745555 podStartE2EDuration="2.805745555s" podCreationTimestamp="2026-02-21 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:41.802999361 +0000 UTC m=+5316.836083559" watchObservedRunningTime="2026-02-21 08:15:41.805745555 +0000 UTC m=+5316.838829753" Feb 21 08:15:42 crc kubenswrapper[4820]: I0221 08:15:42.602694 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmzgn" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" probeResult="failure" output=< Feb 21 08:15:42 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:15:42 crc kubenswrapper[4820]: > Feb 21 08:15:42 crc kubenswrapper[4820]: I0221 08:15:42.696739 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:42 crc kubenswrapper[4820]: E0221 08:15:42.697149 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.848134 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"] Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.849825 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.851976 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.852328 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.853255 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.865599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"] Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 
08:15:43.929472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc 
kubenswrapper[4820]: I0221 08:15:44.031193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031262 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031394 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.032037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.032596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.038076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.038331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.040762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.041215 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.041681 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.050388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod 
\"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.180641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:45 crc kubenswrapper[4820]: W0221 08:15:45.418379 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a6cc6cf_14b3_416d_a415_22fbe0dd9b9d.slice/crio-0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1 WatchSource:0}: Error finding container 0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1: Status 404 returned error can't find the container with id 0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1 Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.421121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"] Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.814723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerStarted","Data":"295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293"} Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.817359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"fc711d1e3d3624a12e28232bfb851ef77cb084672284adb47a256f35df79a888"} Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.817397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1"} Feb 21 08:15:45 crc kubenswrapper[4820]: 
I0221 08:15:45.831943 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8nnn7" podStartSLOduration=2.242753654 podStartE2EDuration="6.831922141s" podCreationTimestamp="2026-02-21 08:15:39 +0000 UTC" firstStartedPulling="2026-02-21 08:15:40.186683979 +0000 UTC m=+5315.219768177" lastFinishedPulling="2026-02-21 08:15:44.775852466 +0000 UTC m=+5319.808936664" observedRunningTime="2026-02-21 08:15:45.82970569 +0000 UTC m=+5320.862789898" watchObservedRunningTime="2026-02-21 08:15:45.831922141 +0000 UTC m=+5320.865006339" Feb 21 08:15:46 crc kubenswrapper[4820]: I0221 08:15:46.829823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"738a1519d1fe0347989f862b01be1a9eb1eac3eae2efa50e4fa5ed3a4a51f3f0"} Feb 21 08:15:46 crc kubenswrapper[4820]: I0221 08:15:46.855261 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-cc65c7f54-9sg96" podStartSLOduration=3.855220418 podStartE2EDuration="3.855220418s" podCreationTimestamp="2026-02-21 08:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:46.850618673 +0000 UTC m=+5321.883702871" watchObservedRunningTime="2026-02-21 08:15:46.855220418 +0000 UTC m=+5321.888304626" Feb 21 08:15:47 crc kubenswrapper[4820]: I0221 08:15:47.836419 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:47 crc kubenswrapper[4820]: I0221 08:15:47.836738 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.743554 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" 
Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.799886 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.800152 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns" containerID="cri-o://f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" gracePeriod=10 Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.855825 4820 generic.go:334] "Generic (PLEG): container finished" podID="867214ab-adcb-4e78-838b-a16cda8f543c" containerID="295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293" exitCode=0 Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.855867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerDied","Data":"295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.285897 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.385692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.385774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386705 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.391684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6" (OuterVolumeSpecName: "kube-api-access-qfsf6") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "kube-api-access-qfsf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.448267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.451792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.456706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.483452 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config" (OuterVolumeSpecName: "config") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488846 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488886 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488900 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488912 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.865918 4820 generic.go:334] "Generic (PLEG): container finished" podID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" exitCode=0 Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866022 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"aaa04799154f4e72c5b03417ed41306779ddc85795c0bc38fbe0c0a1449205db"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866113 4820 scope.go:117] "RemoveContainer" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.893454 4820 scope.go:117] "RemoveContainer" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.901794 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.910096 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.925710 4820 scope.go:117] "RemoveContainer" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: E0221 08:15:50.926491 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": container with ID starting with f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf not found: ID does not exist" 
containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.926532 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} err="failed to get container status \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": rpc error: code = NotFound desc = could not find container \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": container with ID starting with f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf not found: ID does not exist" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.926556 4820 scope.go:117] "RemoveContainer" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: E0221 08:15:50.927183 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": container with ID starting with 578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3 not found: ID does not exist" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.927219 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3"} err="failed to get container status \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": rpc error: code = NotFound desc = could not find container \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": container with ID starting with 578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3 not found: ID does not exist" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.212936 4820 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.302338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.302770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.305504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc" (OuterVolumeSpecName: "kube-api-access-66vhc") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "kube-api-access-66vhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.307061 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.307580 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.325791 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.325830 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.326637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts" (OuterVolumeSpecName: "scripts") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.328556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.330143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.427962 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428250 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428263 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428275 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc 
kubenswrapper[4820]: I0221 08:15:51.600180 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.656801 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.718915 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" path="/var/lib/kubelet/pods/4335ce63-5465-40bb-aedb-f31d8c7807fd/volumes" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.835114 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.910553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.911057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerDied","Data":"4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192"} Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.911079 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192" Feb 21 08:15:52 crc kubenswrapper[4820]: I0221 08:15:52.917273 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kmzgn" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" containerID="cri-o://8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" gracePeriod=2 Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.407249 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569342 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.570381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities" (OuterVolumeSpecName: "utilities") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.575500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s" (OuterVolumeSpecName: "kube-api-access-sf72s") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "kube-api-access-sf72s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.671152 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.671193 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.703797 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.773475 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929034 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" exitCode=0 Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929096 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"} Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"8f92b520fb9e9df10aafe11fead045065f6b3ec213cdd43575b3f0ea407190ec"} Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929547 4820 scope.go:117] "RemoveContainer" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.957341 4820 scope.go:117] "RemoveContainer" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.957734 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.965310 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.978393 4820 scope.go:117] "RemoveContainer" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.016565 4820 scope.go:117] "RemoveContainer" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.016989 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": container with ID starting with 8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673 not found: ID does not exist" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.017020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"} err="failed to get container status \"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": rpc error: code = NotFound desc = could not find container \"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": container with ID starting with 8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673 not found: ID does not exist" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.017039 4820 scope.go:117] "RemoveContainer" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b" Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.018576 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": container with ID starting with 7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b not found: ID does not exist" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.018623 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"} err="failed to get container status \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": rpc error: code = NotFound desc = could not find container \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": container with ID 
starting with 7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b not found: ID does not exist" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.018651 4820 scope.go:117] "RemoveContainer" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9" Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.018995 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": container with ID starting with 22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9 not found: ID does not exist" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.019016 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"} err="failed to get container status \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": rpc error: code = NotFound desc = could not find container \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": container with ID starting with 22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9 not found: ID does not exist" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.186586 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.187366 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.696651 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.697110 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:55 crc kubenswrapper[4820]: I0221 08:15:55.708480 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" path="/var/lib/kubelet/pods/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a/volumes" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.751811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752563 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="init" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752584 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="init" Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752601 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-content" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752611 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-content" Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752629 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752637 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance" Feb 21 08:15:59 crc 
kubenswrapper[4820]: E0221 08:15:59.752654 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752663 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns" Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752692 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752700 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752710 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-utilities" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-utilities" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752917 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752942 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752965 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.753672 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w9mkt" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.760363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.858375 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.859640 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.861551 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.868339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.874192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.874257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfff\" (UniqueName: 
\"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976511 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976588 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.977965 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.003005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.078427 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9mkt" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.079142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.079333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.080194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.096977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: 
\"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.174256 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.519318 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:16:00 crc kubenswrapper[4820]: W0221 08:16:00.522498 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod526239a3_9756_4dd4_9e38_6474bd1b2709.slice/crio-c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7 WatchSource:0}: Error finding container c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7: Status 404 returned error can't find the container with id c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7 Feb 21 08:16:00 crc kubenswrapper[4820]: W0221 08:16:00.618859 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb19f4a26_20d3_44b1_a159_3fd72a92e68f.slice/crio-c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce WatchSource:0}: Error finding container c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce: Status 404 returned error can't find the container with id c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.624529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.997958 4820 generic.go:334] "Generic (PLEG): container finished" podID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerID="9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa" exitCode=0 Feb 21 08:16:00 crc 
kubenswrapper[4820]: I0221 08:16:00.998056 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerDied","Data":"9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa"} Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.998334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerStarted","Data":"c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7"} Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006657 4820 generic.go:334] "Generic (PLEG): container finished" podID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerID="2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9" exitCode=0 Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerDied","Data":"2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9"} Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerStarted","Data":"c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce"} Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.397377 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.401891 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w9mkt" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"526239a3-9756-4dd4-9e38-6474bd1b2709\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522207 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"526239a3-9756-4dd4-9e38-6474bd1b2709\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522490 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "526239a3-9756-4dd4-9e38-6474bd1b2709" (UID: "526239a3-9756-4dd4-9e38-6474bd1b2709"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522919 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.523327 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b19f4a26-20d3-44b1-a159-3fd72a92e68f" (UID: "b19f4a26-20d3-44b1-a159-3fd72a92e68f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.526987 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c" (OuterVolumeSpecName: "kube-api-access-fps8c") pod "b19f4a26-20d3-44b1-a159-3fd72a92e68f" (UID: "b19f4a26-20d3-44b1-a159-3fd72a92e68f"). InnerVolumeSpecName "kube-api-access-fps8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.529258 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff" (OuterVolumeSpecName: "kube-api-access-vvfff") pod "526239a3-9756-4dd4-9e38-6474bd1b2709" (UID: "526239a3-9756-4dd4-9e38-6474bd1b2709"). InnerVolumeSpecName "kube-api-access-vvfff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625316 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625385 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021786 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w9mkt" Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerDied","Data":"c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7"} Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021896 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7" Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerDied","Data":"c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce"} Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023311 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg" Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023326 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.137485 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:16:05 crc kubenswrapper[4820]: E0221 08:16:05.138067 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138079 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create" Feb 21 08:16:05 crc kubenswrapper[4820]: E0221 08:16:05.138120 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138300 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138312 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138861 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.141006 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.141633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.142184 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.145332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265887 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.266169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.266412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.374320 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: 
\"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.374984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.375978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.376099 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.385686 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.455567 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.895290 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:16:06 crc kubenswrapper[4820]: I0221 08:16:06.046454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerStarted","Data":"7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44"} Feb 21 08:16:09 crc kubenswrapper[4820]: I0221 08:16:09.697017 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:09 crc kubenswrapper[4820]: E0221 08:16:09.697927 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:18 crc kubenswrapper[4820]: I0221 08:16:18.570862 4820 scope.go:117] "RemoveContainer" containerID="7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5" Feb 21 08:16:20 crc kubenswrapper[4820]: I0221 08:16:20.696929 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:20 crc kubenswrapper[4820]: E0221 08:16:20.697504 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.556993 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb" Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.557726 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb" Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.557880 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9x56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6fhr4_openstack(918975eb-d5b2-4b0e-9b35-36e92f03527b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.559357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6fhr4" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" Feb 21 08:16:27 crc kubenswrapper[4820]: E0221 08:16:27.219681 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/cinder-db-sync-6fhr4" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" Feb 21 08:16:34 crc kubenswrapper[4820]: I0221 08:16:34.697135 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:34 crc kubenswrapper[4820]: E0221 08:16:34.697983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:43 crc kubenswrapper[4820]: I0221 08:16:43.344279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" 
event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerStarted","Data":"396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e"} Feb 21 08:16:43 crc kubenswrapper[4820]: I0221 08:16:43.365350 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6fhr4" podStartSLOduration=2.376065914 podStartE2EDuration="38.365327897s" podCreationTimestamp="2026-02-21 08:16:05 +0000 UTC" firstStartedPulling="2026-02-21 08:16:05.901051385 +0000 UTC m=+5340.934135573" lastFinishedPulling="2026-02-21 08:16:41.890313358 +0000 UTC m=+5376.923397556" observedRunningTime="2026-02-21 08:16:43.357921836 +0000 UTC m=+5378.391006054" watchObservedRunningTime="2026-02-21 08:16:43.365327897 +0000 UTC m=+5378.398412095" Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.359631 4820 generic.go:334] "Generic (PLEG): container finished" podID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerID="396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e" exitCode=0 Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.360048 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerDied","Data":"396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e"} Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.702864 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:45 crc kubenswrapper[4820]: E0221 08:16:45.703106 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.700285 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801419 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801602 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: 
\"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.803414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808035 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56" (OuterVolumeSpecName: "kube-api-access-q9x56") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "kube-api-access-q9x56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts" (OuterVolumeSpecName: "scripts") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.834673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.857075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data" (OuterVolumeSpecName: "config-data") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904110 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904150 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904165 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904177 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904190 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904202 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.376667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerDied","Data":"7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44"} Feb 21 08:16:47 crc 
kubenswrapper[4820]: I0221 08:16:47.376699 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.376736 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.733373 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:16:47 crc kubenswrapper[4820]: E0221 08:16:47.734003 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.734016 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.734181 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.738227 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.768160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.818334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.818498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.883434 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.884821 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.887118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.887133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.891514 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.892480 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.906087 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod 
\"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.920700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.924798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " 
pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.924978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.926808 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.946116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021117 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021215 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.064764 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122839 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.125709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.127008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.128403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.129205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.129707 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.130064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.144038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.199666 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.606713 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.803754 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.415524 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b" exitCode=0 Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.416005 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"} Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.416046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerStarted","Data":"147651eae6f2d4d4506345601d2cf298cfe763874e04c3aa44b45feb488eb2f6"} Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.419864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"2d11fa328973af76ce6736058e8404d7bf1916d12ea6f6adb1ab2cb8b5681fe6"} Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.031175 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.462799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" 
event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerStarted","Data":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"} Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.464311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373"} Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99"} Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470610 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log" containerID="cri-o://3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99" gracePeriod=30 Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api" containerID="cri-o://1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373" gracePeriod=30 Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.506112 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.506092423 
podStartE2EDuration="3.506092423s" podCreationTimestamp="2026-02-21 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:16:50.503113023 +0000 UTC m=+5385.536197221" watchObservedRunningTime="2026-02-21 08:16:50.506092423 +0000 UTC m=+5385.539176611" Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.515442 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98b448c79-xx42c" podStartSLOduration=3.515415075 podStartE2EDuration="3.515415075s" podCreationTimestamp="2026-02-21 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:16:50.487930202 +0000 UTC m=+5385.521014440" watchObservedRunningTime="2026-02-21 08:16:50.515415075 +0000 UTC m=+5385.548499273" Feb 21 08:16:51 crc kubenswrapper[4820]: I0221 08:16:51.480574 4820 generic.go:334] "Generic (PLEG): container finished" podID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerID="3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99" exitCode=143 Feb 21 08:16:51 crc kubenswrapper[4820]: I0221 08:16:51.480657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99"} Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.067087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.124013 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"] Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.124319 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns" containerID="cri-o://b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0" gracePeriod=10 Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.533154 4820 generic.go:334] "Generic (PLEG): container finished" podID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerID="b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0" exitCode=0 Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.533198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0"} Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.179441 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216116 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216224 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: 
\"d72eaa53-54ec-46af-91c3-fcf248385b34\") " Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.240459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d" (OuterVolumeSpecName: "kube-api-access-4mw5d") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "kube-api-access-4mw5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.270747 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.281561 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config" (OuterVolumeSpecName: "config") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.291451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.297005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318035 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318068 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318078 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318090 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") on node \"crc\" DevicePath 
\"\"" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318100 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff"} Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544077 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544138 4820 scope.go:117] "RemoveContainer" containerID="b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.574841 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"] Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.575339 4820 scope.go:117] "RemoveContainer" containerID="093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.581250 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"] Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.697409 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:59 crc kubenswrapper[4820]: E0221 08:16:59.698040 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.730322 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" path="/var/lib/kubelet/pods/d72eaa53-54ec-46af-91c3-fcf248385b34/volumes" Feb 21 08:17:00 crc kubenswrapper[4820]: I0221 08:17:00.187342 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 08:17:10 crc kubenswrapper[4820]: I0221 08:17:10.697325 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:10 crc kubenswrapper[4820]: E0221 08:17:10.698498 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:20 crc kubenswrapper[4820]: I0221 08:17:20.726120 4820 generic.go:334] "Generic (PLEG): container finished" podID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerID="1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373" exitCode=137 Feb 21 08:17:20 crc kubenswrapper[4820]: I0221 08:17:20.726296 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373"} Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.485113 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637985 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638112 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638151 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs" (OuterVolumeSpecName: "logs") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.639071 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.639128 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.643969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.645151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9" (OuterVolumeSpecName: "kube-api-access-gkjt9") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "kube-api-access-gkjt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.645698 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts" (OuterVolumeSpecName: "scripts") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.663103 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.681141 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data" (OuterVolumeSpecName: "config-data") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"2d11fa328973af76ce6736058e8404d7bf1916d12ea6f6adb1ab2cb8b5681fe6"} Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736799 4820 scope.go:117] "RemoveContainer" containerID="1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736832 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742891 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742929 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742944 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742958 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742969 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.774589 4820 scope.go:117] "RemoveContainer" containerID="3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.782903 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.795104 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.810824 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="init" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811220 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="init" Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811253 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811260 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api" Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811282 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log" Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811302 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811461 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811472 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log" Feb 21 08:17:21 crc 
kubenswrapper[4820]: I0221 08:17:21.811485 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.812451 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821394 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821610 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821991 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.822106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.822120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.823843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.825802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.845475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.845577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.947765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.950988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.952078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953207 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953458 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.954462 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.954817 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.955906 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.968336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.131354 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.374287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.697059 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:22 crc kubenswrapper[4820]: E0221 08:17:22.697334 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.745327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"3267b575fe33eb154cb252c0c5802dd5fb1d40c51fa9f2b771a061f7ac089e05"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.707471 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" path="/var/lib/kubelet/pods/195a168a-1e3f-4880-a8b8-a74c58b8adad/volumes" Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758210 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.779671 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.779648947 podStartE2EDuration="2.779648947s" podCreationTimestamp="2026-02-21 08:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:17:23.77788444 +0000 UTC m=+5418.810968638" watchObservedRunningTime="2026-02-21 08:17:23.779648947 +0000 UTC m=+5418.812733155" Feb 21 08:17:26 crc kubenswrapper[4820]: I0221 08:17:26.570600 4820 scope.go:117] "RemoveContainer" containerID="1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557" Feb 21 08:17:33 crc kubenswrapper[4820]: I0221 08:17:33.696869 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:33 crc kubenswrapper[4820]: E0221 08:17:33.697668 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:34 crc kubenswrapper[4820]: I0221 08:17:34.395858 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 08:17:46 crc kubenswrapper[4820]: I0221 08:17:46.696917 4820 scope.go:117] "RemoveContainer" 
containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:46 crc kubenswrapper[4820]: E0221 08:17:46.697640 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.417381 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.419641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.423140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.445536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" 
Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454387 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557204 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.565898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.566430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.567976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.568033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.581512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 
21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.739165 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.183407 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.194134 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802004 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802506 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" containerID="cri-o://93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" gracePeriod=30 Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802607 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" containerID="cri-o://8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" gracePeriod=30 Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.056740 4820 generic.go:334] "Generic (PLEG): container finished" podID="d61adc77-1750-4151-8591-10ba08713538" containerID="93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" exitCode=143 Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.056873 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888"} Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.060629 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"5624406c32ba41e07bb32ce7d2cd8141f5f7956b50a9b42d7db5ce9a49ade7fc"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.072614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.072950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.093633 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.837772618 podStartE2EDuration="3.093611561s" podCreationTimestamp="2026-02-21 08:17:52 +0000 UTC" firstStartedPulling="2026-02-21 08:17:53.193805368 +0000 UTC m=+5448.226889566" lastFinishedPulling="2026-02-21 08:17:53.449644311 +0000 UTC m=+5448.482728509" observedRunningTime="2026-02-21 08:17:55.087784324 +0000 UTC m=+5450.120868522" watchObservedRunningTime="2026-02-21 08:17:55.093611561 +0000 UTC m=+5450.126695759" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.530025 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.532065 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.557498 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720390 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.721418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.723064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.750705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.872004 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:56 crc kubenswrapper[4820]: I0221 08:17:56.506723 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:56 crc kubenswrapper[4820]: W0221 08:17:56.532478 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c352da_ee5d_4dc2_b5b0_ba5e0e29e272.slice/crio-1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af WatchSource:0}: Error finding container 1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af: Status 404 returned error can't find the container with id 1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.101778 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481" exitCode=0 Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.101883 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.102220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.104797 4820 generic.go:334] "Generic (PLEG): container finished" podID="d61adc77-1750-4151-8591-10ba08713538" containerID="8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" exitCode=0 Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 
08:17:57.104863 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.132735 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.53:8776/healthcheck\": dial tcp 10.217.1.53:8776: connect: connection refused" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.373839 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448500 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs" (OuterVolumeSpecName: "logs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449562 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.450506 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.450530 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts" (OuterVolumeSpecName: "scripts") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460760 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd" (OuterVolumeSpecName: "kube-api-access-7pwfd") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "kube-api-access-7pwfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.479897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.528913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.529062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.536807 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data" (OuterVolumeSpecName: "config-data") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.551989 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552019 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552031 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552039 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") on node \"crc\" DevicePath 
\"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552048 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552058 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552066 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.697152 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:57 crc kubenswrapper[4820]: E0221 08:17:57.697482 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.740128 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.122361 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76"} Feb 21 08:17:58 crc 
kubenswrapper[4820]: I0221 08:17:58.124529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"3267b575fe33eb154cb252c0c5802dd5fb1d40c51fa9f2b771a061f7ac089e05"} Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.124560 4820 scope.go:117] "RemoveContainer" containerID="8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.124677 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.147042 4820 scope.go:117] "RemoveContainer" containerID="93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.184876 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.192381 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.203277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: E0221 08:17:58.203944 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204030 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: E0221 08:17:58.204106 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204163 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204390 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204459 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.217575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221473 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221606 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.232914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273393 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.274027 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.274696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc
kubenswrapper[4820]: I0221 08:17:58.275592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.275784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377122 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377207 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221
08:17:58.377954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.378146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.382873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.383557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.383624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") "
pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.394432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.575665 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.808119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.136217 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76" exitCode=0
Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.136331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76"}
Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.137883 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"9772f4127370e54ae50390f1cc9e9ad76af1eefc742acaee4df20b761c03da80"}
Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.712687 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61adc77-1750-4151-8591-10ba08713538" path="/var/lib/kubelet/pods/d61adc77-1750-4151-8591-10ba08713538/volumes"
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.152052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a"}
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.154483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"49624a4ac27dcc18e8f13447613ba1c7177d5ffdbdd4510a0fae4e8ceb6361b7"}
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221
08:18:00.154530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"66daafda140763b507277b24604afb2a0ed31073a4413877bf805b48a90f1dc2"}
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.154623 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.176395 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99422" podStartSLOduration=2.656743832 podStartE2EDuration="5.176375744s" podCreationTimestamp="2026-02-21 08:17:55 +0000 UTC" firstStartedPulling="2026-02-21 08:17:57.103879943 +0000 UTC m=+5452.136964141" lastFinishedPulling="2026-02-21 08:17:59.623511855 +0000 UTC m=+5454.656596053" observedRunningTime="2026-02-21 08:18:00.171805311 +0000 UTC m=+5455.204889529" watchObservedRunningTime="2026-02-21 08:18:00.176375744 +0000 UTC m=+5455.209459942"
Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.200009 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.199986703 podStartE2EDuration="2.199986703s" podCreationTimestamp="2026-02-21 08:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:00.194680729 +0000 UTC m=+5455.227764927" watchObservedRunningTime="2026-02-21 08:18:00.199986703 +0000 UTC m=+5455.233070901"
Feb 21 08:18:02 crc kubenswrapper[4820]: I0221 08:18:02.977579 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.056946 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.181106 4820
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler" containerID="cri-o://4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" gracePeriod=30
Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.181435 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe" containerID="cri-o://47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" gracePeriod=30
Feb 21 08:18:04 crc kubenswrapper[4820]: I0221 08:18:04.187528 4820 generic.go:334] "Generic (PLEG): container finished" podID="c820835c-1414-4968-9832-7987b99d05fc" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" exitCode=0
Feb 21 08:18:04 crc kubenswrapper[4820]: I0221 08:18:04.187877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"}
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.770835 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.873273 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99422"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.873704 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-99422"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.917994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99422"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.920694 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v9fdw"]
Feb 21 08:18:05 crc kubenswrapper[4820]: E0221 08:18:05.926529 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.926578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe"
Feb 21 08:18:05 crc kubenswrapper[4820]: E0221 08:18:05.926608 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.926617 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.927070 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.927091 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe"
Feb 21 08:18:05 crc
kubenswrapper[4820]: I0221 08:18:05.928645 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.938682 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"]
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName:
\"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") "
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.940430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962570 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh" (OuterVolumeSpecName: "kube-api-access-lh4mh") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "kube-api-access-lh4mh".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962666 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts" (OuterVolumeSpecName: "scripts") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.998624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042503 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042519 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042531 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042543 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042555 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.045923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data" (OuterVolumeSpecName: "config-data") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.149599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.149966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.150154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.150732 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.151270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.151640 4820 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.171081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205123 4820 generic.go:334] "Generic (PLEG): container finished" podID="c820835c-1414-4968-9832-7987b99d05fc" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" exitCode=0
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205164 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"}
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"5624406c32ba41e07bb32ce7d2cd8141f5f7956b50a9b42d7db5ce9a49ade7fc"}
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205249 4820 scope.go:117] "RemoveContainer" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205742 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.233645 4820 scope.go:117] "RemoveContainer" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.260035 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.269538 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99422"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.274767 4820 scope.go:117] "RemoveContainer" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"
Feb 21 08:18:06 crc kubenswrapper[4820]: E0221 08:18:06.277367 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": container with ID starting with 47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e not found: ID does not exist" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.277434 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"} err="failed to get container status \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": rpc error: code = NotFound desc = could not find container \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": container with ID starting with 47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e not found: ID does not exist"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.277456 4820 scope.go:117] "RemoveContainer" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"
Feb 21 08:18:06 crc kubenswrapper[4820]: E0221 08:18:06.278818 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": container with ID starting with 4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38 not found: ID does not exist" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.278843 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"} err="failed to get container status \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": rpc error: code = NotFound desc = could not find container \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": container with ID starting with 4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38 not found: ID does not exist"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.286057 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.293671 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.294994 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.305620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.322952 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.339731 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456479 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0"
Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") "
pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558470 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.563951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.564130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.565082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.577973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.582802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.600954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.619942 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.878839 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:06 crc kubenswrapper[4820]: W0221 08:18:06.885280 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25afb423_bb97_4560_9e0a_369f39227c3f.slice/crio-7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7 WatchSource:0}: Error finding container 7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7: Status 404 returned error can't find the container with id 7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7 Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.913462 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.915606 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.938887 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.056084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:07 crc kubenswrapper[4820]: W0221 08:18:07.061446 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77665b9b_37d6_4277_a75b_e30637b4b269.slice/crio-43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1 WatchSource:0}: Error finding container 43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1: Status 404 returned error can't find the container with id 43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1 Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.065695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.065899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.066038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167636 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167678 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.168230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.168433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.183729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.215514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220787 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" exitCode=0 Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.252192 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.517039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.708190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c820835c-1414-4968-9832-7987b99d05fc" path="/var/lib/kubelet/pods/c820835c-1414-4968-9832-7987b99d05fc/volumes" Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.279815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"da8375ea9f80c1ffa1959adfb8ff214f9098971d2e9e07de236a15a77ca439e7"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.297691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.330709 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.343993 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894" exitCode=0 Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344425 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99422" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" containerID="cri-o://d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" gracePeriod=2 Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344499 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerStarted","Data":"ba573cf9178ff188dac0401b89f49fc044a73915f950055de346dd0e475d338c"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.696602 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:08 crc kubenswrapper[4820]: E0221 08:18:08.696967 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:09 crc kubenswrapper[4820]: I0221 08:18:09.504963 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" exitCode=0 Feb 21 08:18:09 crc kubenswrapper[4820]: I0221 08:18:09.506268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.121050 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194217 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.196376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities" (OuterVolumeSpecName: "utilities") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.200342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr" (OuterVolumeSpecName: "kube-api-access-rfgbr") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "kube-api-access-rfgbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.296412 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.296710 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.297334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.397983 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.515844 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" exitCode=0 Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.515929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.519932 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.519980 4820 scope.go:117] "RemoveContainer" containerID="d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.520113 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.530100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"41e2fd166793b932f0a7fbc5d95846169075bef7a9462777446e71983b472c8f"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.559470 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.572718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.576227 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.57620698 podStartE2EDuration="4.57620698s" podCreationTimestamp="2026-02-21 08:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:10.565658914 +0000 UTC m=+5465.598743122" watchObservedRunningTime="2026-02-21 08:18:10.57620698 +0000 UTC m=+5465.609291188" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.586549 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 08:18:11 crc 
kubenswrapper[4820]: I0221 08:18:11.621618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.664535 4820 scope.go:117] "RemoveContainer" containerID="e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.692515 4820 scope.go:117] "RemoveContainer" containerID="bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.710078 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" path="/var/lib/kubelet/pods/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272/volumes" Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.642612 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.644633 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868" exitCode=0 Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.644649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868"} Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.666288 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v9fdw" podStartSLOduration=3.3001947879999998 podStartE2EDuration="8.666270227s" podCreationTimestamp="2026-02-21 08:18:05 +0000 UTC" 
firstStartedPulling="2026-02-21 08:18:07.223185158 +0000 UTC m=+5462.256269356" lastFinishedPulling="2026-02-21 08:18:12.589260597 +0000 UTC m=+5467.622344795" observedRunningTime="2026-02-21 08:18:13.660912622 +0000 UTC m=+5468.693996820" watchObservedRunningTime="2026-02-21 08:18:13.666270227 +0000 UTC m=+5468.699354425" Feb 21 08:18:14 crc kubenswrapper[4820]: I0221 08:18:14.656541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerStarted","Data":"41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab"} Feb 21 08:18:14 crc kubenswrapper[4820]: I0221 08:18:14.690442 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qwrz" podStartSLOduration=2.721696783 podStartE2EDuration="8.690424757s" podCreationTimestamp="2026-02-21 08:18:06 +0000 UTC" firstStartedPulling="2026-02-21 08:18:08.347368585 +0000 UTC m=+5463.380452773" lastFinishedPulling="2026-02-21 08:18:14.316096549 +0000 UTC m=+5469.349180747" observedRunningTime="2026-02-21 08:18:14.683909661 +0000 UTC m=+5469.716993869" watchObservedRunningTime="2026-02-21 08:18:14.690424757 +0000 UTC m=+5469.723508955" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.340924 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.341269 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.387351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.910357 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Feb 21 08:18:17 crc kubenswrapper[4820]: I0221 08:18:17.252897 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:17 crc kubenswrapper[4820]: I0221 08:18:17.252970 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:18 crc kubenswrapper[4820]: I0221 08:18:18.302969 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" probeResult="failure" output=< Feb 21 08:18:18 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:18:18 crc kubenswrapper[4820]: > Feb 21 08:18:19 crc kubenswrapper[4820]: I0221 08:18:19.698796 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:19 crc kubenswrapper[4820]: E0221 08:18:19.699338 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.470889 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471418 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-utilities" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471442 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-utilities" Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471456 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471462 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471487 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-content" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471495 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-content" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471661 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.472228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.490049 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.578136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.583824 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.597025 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.601807 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.601864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.630165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" 
Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.704401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.727292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.800379 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.807629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.808390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.808980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.828599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.918105 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.289629 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:21 crc kubenswrapper[4820]: W0221 08:18:21.293530 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd19c6e2c_81cf_472e_babb_fb9cf7bf052b.slice/crio-99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f WatchSource:0}: Error finding container 99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f: Status 404 returned error can't find the container with id 99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f Feb 21 08:18:21 crc kubenswrapper[4820]: W0221 08:18:21.446460 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3214fb7b_d651_4bd3_a75b_a9995693fc60.slice/crio-e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1 WatchSource:0}: Error finding container e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1: Status 404 returned error can't find the container with id e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1 Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.450257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.733859 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerStarted","Data":"3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e"} Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.733912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" 
event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerStarted","Data":"99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f"} Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.735998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerStarted","Data":"e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1"} Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.745087 4820 generic.go:334] "Generic (PLEG): container finished" podID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerID="3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e" exitCode=0 Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.745401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerDied","Data":"3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e"} Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.747403 4820 generic.go:334] "Generic (PLEG): container finished" podID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerID="d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08" exitCode=0 Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.747454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerDied","Data":"d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.108452 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.122132 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291196 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291252 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"3214fb7b-d651-4bd3-a75b-a9995693fc60\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"3214fb7b-d651-4bd3-a75b-a9995693fc60\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d19c6e2c-81cf-472e-babb-fb9cf7bf052b" (UID: "d19c6e2c-81cf-472e-babb-fb9cf7bf052b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.292204 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3214fb7b-d651-4bd3-a75b-a9995693fc60" (UID: "3214fb7b-d651-4bd3-a75b-a9995693fc60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.296331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q" (OuterVolumeSpecName: "kube-api-access-zt22q") pod "3214fb7b-d651-4bd3-a75b-a9995693fc60" (UID: "3214fb7b-d651-4bd3-a75b-a9995693fc60"). InnerVolumeSpecName "kube-api-access-zt22q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.296424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726" (OuterVolumeSpecName: "kube-api-access-f4726") pod "d19c6e2c-81cf-472e-babb-fb9cf7bf052b" (UID: "d19c6e2c-81cf-472e-babb-fb9cf7bf052b"). InnerVolumeSpecName "kube-api-access-f4726". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393309 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393364 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393375 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767343 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767424 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerDied","Data":"99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767476 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerDied","Data":"e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769445 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769474 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.753526 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:25 crc kubenswrapper[4820]: E0221 08:18:25.754291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754309 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: E0221 08:18:25.754339 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754347 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754559 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754582 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.755297 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.757111 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.757384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.765364 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: 
\"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022163 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.026709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" 
(UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.041828 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.043424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.043678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.074155 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.387375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.432077 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.630364 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:26 crc kubenswrapper[4820]: W0221 08:18:26.634730 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7 WatchSource:0}: Error finding container 2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7: Status 404 returned error can't find the container with id 2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7 Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.784209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerStarted","Data":"2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7"} Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.784388 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v9fdw" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" containerID="cri-o://10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" gracePeriod=2 Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.098344 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25afb423_bb97_4560_9e0a_369f39227c3f.slice/crio-conmon-10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.792788 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798144 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" exitCode=0 Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798181 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7"} Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798308 4820 scope.go:117] "RemoveContainer" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.833944 4820 scope.go:117] "RemoveContainer" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.900649 4820 scope.go:117] "RemoveContainer" 
containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.941516 4820 scope.go:117] "RemoveContainer" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.942405 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": container with ID starting with 10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b not found: ID does not exist" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942447 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} err="failed to get container status \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": rpc error: code = NotFound desc = could not find container \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": container with ID starting with 10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942470 4820 scope.go:117] "RemoveContainer" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.942719 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": container with ID starting with 53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577 not found: ID does not exist" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc 
kubenswrapper[4820]: I0221 08:18:27.942752 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} err="failed to get container status \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": rpc error: code = NotFound desc = could not find container \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": container with ID starting with 53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577 not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942773 4820 scope.go:117] "RemoveContainer" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.943316 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": container with ID starting with aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426 not found: ID does not exist" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.943359 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426"} err="failed to get container status \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": rpc error: code = NotFound desc = could not find container \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": container with ID starting with aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426 not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.979443 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities" (OuterVolumeSpecName: "utilities") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.990652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2" (OuterVolumeSpecName: "kube-api-access-8c7g2") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "kube-api-access-8c7g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.050258 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081262 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081308 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081323 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.139292 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.148286 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.304905 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" probeResult="failure" output=< Feb 21 08:18:28 crc kubenswrapper[4820]: timeout: failed to 
connect service ":50051" within 1s Feb 21 08:18:28 crc kubenswrapper[4820]: > Feb 21 08:18:29 crc kubenswrapper[4820]: I0221 08:18:29.742519 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" path="/var/lib/kubelet/pods/25afb423-bb97-4560-9e0a-369f39227c3f/volumes" Feb 21 08:18:33 crc kubenswrapper[4820]: I0221 08:18:33.696801 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:33 crc kubenswrapper[4820]: E0221 08:18:33.697154 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:37 crc kubenswrapper[4820]: I0221 08:18:37.299022 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:37 crc kubenswrapper[4820]: I0221 08:18:37.359908 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:38 crc kubenswrapper[4820]: I0221 08:18:38.113146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:38 crc kubenswrapper[4820]: I0221 08:18:38.897217 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" containerID="cri-o://41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" gracePeriod=2 Feb 21 08:18:39 crc kubenswrapper[4820]: I0221 08:18:39.908679 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" exitCode=0 Feb 21 08:18:39 crc kubenswrapper[4820]: I0221 08:18:39.908721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.562541 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.675401 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities" 
(OuterVolumeSpecName: "utilities") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.679547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc" (OuterVolumeSpecName: "kube-api-access-pvvfc") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "kube-api-access-pvvfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.687262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776838 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776873 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776888 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"ba573cf9178ff188dac0401b89f49fc044a73915f950055de346dd0e475d338c"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947942 4820 scope.go:117] "RemoveContainer" containerID="41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947334 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.950719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerStarted","Data":"f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.968364 4820 scope.go:117] "RemoveContainer" containerID="229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.974649 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-l4nch" podStartSLOduration=2.241599041 podStartE2EDuration="18.974631152s" podCreationTimestamp="2026-02-21 08:18:25 +0000 UTC" firstStartedPulling="2026-02-21 08:18:26.638849593 +0000 UTC m=+5481.671933791" lastFinishedPulling="2026-02-21 08:18:43.371881684 +0000 UTC m=+5498.404965902" observedRunningTime="2026-02-21 08:18:43.969807982 +0000 UTC m=+5499.002892180" watchObservedRunningTime="2026-02-21 08:18:43.974631152 +0000 UTC m=+5499.007715350" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.995153 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:44 crc kubenswrapper[4820]: I0221 08:18:44.004388 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:44 crc kubenswrapper[4820]: I0221 08:18:44.005897 4820 scope.go:117] "RemoveContainer" containerID="869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894" Feb 21 08:18:45 crc kubenswrapper[4820]: I0221 08:18:45.707556 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" path="/var/lib/kubelet/pods/a4169ca0-c75e-496a-9d08-a1fe753df974/volumes" Feb 21 08:18:47 crc kubenswrapper[4820]: 
E0221 08:18:47.545578 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-conmon-f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:18:47 crc kubenswrapper[4820]: I0221 08:18:47.987444 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9668bc3-af3a-43af-8ead-9cc596776786" containerID="f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711" exitCode=0 Feb 21 08:18:47 crc kubenswrapper[4820]: I0221 08:18:47.987503 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerDied","Data":"f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711"} Feb 21 08:18:48 crc kubenswrapper[4820]: I0221 08:18:48.697399 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:48 crc kubenswrapper[4820]: E0221 08:18:48.697934 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.396574 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.491901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492011 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.497625 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v" (OuterVolumeSpecName: "kube-api-access-dlq7v") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "kube-api-access-dlq7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.498351 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.523699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.540726 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data" (OuterVolumeSpecName: "config-data") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594277 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594321 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594335 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594346 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerDied","Data":"2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7"} Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004563 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004619 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.491774 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492227 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492261 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492298 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492324 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492341 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" 
containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492347 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492359 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492364 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492381 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492387 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492572 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492604 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.493552 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.526708 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555790 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.586233 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.592834 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.595466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.595882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.604521 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.617374 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " 
pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.662875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.663205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 
08:18:50.663835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.666332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.702230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.719404 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.721332 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.723541 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.743752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764849 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764925 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") 
" pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765313 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.837685 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.867624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " 
pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 
08:18:50.868748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.871807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.872330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.873862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.875037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.877565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.877802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.884678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.885276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.889786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.892323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.893734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.893911 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.933102 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.077372 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.377335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:18:51 crc kubenswrapper[4820]: W0221 08:18:51.647455 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f08a0d4_93f9_4236_99df_d1b3f77f9efa.slice/crio-9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e WatchSource:0}: Error finding container 9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e: Status 404 returned error can't find the container with id 9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.647606 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.800544 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:51 crc kubenswrapper[4820]: W0221 08:18:51.808914 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60859ade_51ea_4da8_84ac_55d16f7b01b8.slice/crio-529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68 WatchSource:0}: Error finding container 529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68: Status 404 returned error can't find the container with id 529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68 Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.053965 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" exitCode=0 Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.054026 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7"} Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.054053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerStarted","Data":"f827b80cd53d1809ff5c55e3c26ee1b57450c8044c04a10d2fb708ccf54ddf5e"} Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.057712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e"} Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.059650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68"} Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.171908 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.048669 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121638 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" containerID="cri-o://e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" gracePeriod=30 Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121781 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" containerID="cri-o://cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" gracePeriod=30 Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.134877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.143362 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerStarted","Data":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"} Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.144365 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.177282 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.177263486 podStartE2EDuration="3.177263486s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:53.172659061 +0000 UTC m=+5508.205743269" watchObservedRunningTime="2026-02-21 08:18:53.177263486 +0000 UTC m=+5508.210347684" Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.262454 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-88785db75-n675s" podStartSLOduration=3.2624326200000002 podStartE2EDuration="3.26243262s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:53.219744825 +0000 UTC m=+5508.252829023" watchObservedRunningTime="2026-02-21 08:18:53.26243262 +0000 UTC m=+5508.295516818" Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.869579 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039719 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" 
(UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039950 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039950 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.040519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs" (OuterVolumeSpecName: "logs") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.040616 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.047203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts" (OuterVolumeSpecName: "scripts") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.048692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k" (OuterVolumeSpecName: "kube-api-access-fms2k") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "kube-api-access-fms2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.101646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.123855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data" (OuterVolumeSpecName: "config-data") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142041 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142074 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142085 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142095 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142102 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154404 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" exitCode=143 Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154447 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" exitCode=143 Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154500 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155107 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e"} Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155205 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159696 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" containerID="cri-o://71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" gracePeriod=30 Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159787 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" containerID="cri-o://3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" gracePeriod=30 Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.187782 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.187764057 podStartE2EDuration="4.187764057s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:54.187703675 +0000 UTC m=+5509.220787883" watchObservedRunningTime="2026-02-21 08:18:54.187764057 +0000 UTC m=+5509.220848255" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.232000 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.252988 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.254736 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.255180 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255209 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} err="failed to get container status \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255229 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.255489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255513 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} err="failed to get container status \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255525 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255801 4820 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} err="failed to get container status \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255824 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.256161 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} err="failed to get container status \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.273343 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.281930 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.282376 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282392 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" Feb 21 08:18:54 crc 
kubenswrapper[4820]: E0221 08:18:54.282421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282722 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.283862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.287050 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.287284 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.301603 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.446991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447356 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447442 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447546 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549685 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.550793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.551000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.555137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.555762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.557321 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.557754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.568383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.614782 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.770370 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.957775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958423 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958641 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: 
\"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs" (OuterVolumeSpecName: "logs") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958872 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.959250 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.959269 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.965749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65" (OuterVolumeSpecName: "kube-api-access-wlc65") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "kube-api-access-wlc65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.971421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts" (OuterVolumeSpecName: "scripts") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.991406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.006572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data" (OuterVolumeSpecName: "config-data") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061387 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061421 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061433 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061444 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172219 4820 generic.go:334] "Generic (PLEG): container finished" podID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" exitCode=0 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172272 4820 generic.go:334] "Generic (PLEG): container finished" podID="60859ade-51ea-4da8-84ac-55d16f7b01b8" 
containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" exitCode=143 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172270 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172331 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172342 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.184095 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: W0221 08:18:55.185692 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b012ae7_d786_413d_82ca_88448b64b4cd.slice/crio-4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9 WatchSource:0}: Error finding container 4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9: Status 404 returned error can't find the container with id 4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.243972 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.256725 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.267175 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.282482 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282500 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" 
containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.282516 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282523 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282678 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282691 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.283588 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.285960 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.286157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.293869 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.309309 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.311291 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 
3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.311475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} err="failed to get container status \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.311622 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.311975 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not exist" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312063 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} err="failed to get container status \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not 
exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312142 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312495 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} err="failed to get container status \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.313459 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.325630 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} err="failed to get container status \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.469931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc 
kubenswrapper[4820]: I0221 08:18:55.470001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470069 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470165 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470227 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 
08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575556 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.576782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.578760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.579443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.579728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.587350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.589350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.598978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.615077 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.718572 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" path="/var/lib/kubelet/pods/4f08a0d4-93f9-4236-99df-d1b3f77f9efa/volumes" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.719964 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" path="/var/lib/kubelet/pods/60859ade-51ea-4da8-84ac-55d16f7b01b8/volumes" Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.183008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f"} Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.183071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9"} Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.224193 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:57 crc kubenswrapper[4820]: I0221 08:18:57.194399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1"} Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.204227 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4"} 
Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.204695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9"} Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.205930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64"} Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.231537 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.231519357 podStartE2EDuration="4.231519357s" podCreationTimestamp="2026-02-21 08:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:58.226944024 +0000 UTC m=+5513.260028212" watchObservedRunningTime="2026-02-21 08:18:58.231519357 +0000 UTC m=+5513.264603555" Feb 21 08:18:59 crc kubenswrapper[4820]: I0221 08:18:59.239190 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.23916643 podStartE2EDuration="4.23916643s" podCreationTimestamp="2026-02-21 08:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:59.232333146 +0000 UTC m=+5514.265417374" watchObservedRunningTime="2026-02-21 08:18:59.23916643 +0000 UTC m=+5514.272250628" Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.696906 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:00 crc kubenswrapper[4820]: E0221 
08:19:00.697488 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.840483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.902369 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.902927 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98b448c79-xx42c" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns" containerID="cri-o://6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" gracePeriod=10 Feb 21 08:19:01 crc kubenswrapper[4820]: I0221 08:19:01.971627 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004612 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.018489 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl" (OuterVolumeSpecName: "kube-api-access-2shsl") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "kube-api-access-2shsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.052129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.053253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config" (OuterVolumeSpecName: "config") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.054080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.064549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106450 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106689 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106761 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106812 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240211 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" exitCode=0 Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"} Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 
08:19:02.240417 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"147651eae6f2d4d4506345601d2cf298cfe763874e04c3aa44b45feb488eb2f6"} Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240686 4820 scope.go:117] "RemoveContainer" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.266641 4820 scope.go:117] "RemoveContainer" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.275178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.282936 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300101 4820 scope.go:117] "RemoveContainer" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" Feb 21 08:19:02 crc kubenswrapper[4820]: E0221 08:19:02.300541 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": container with ID starting with 6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30 not found: ID does not exist" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300575 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"} err="failed to get container status \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": rpc error: code = NotFound desc = could not find container \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": container with ID starting with 6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30 not found: ID does not exist" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300600 4820 scope.go:117] "RemoveContainer" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b" Feb 21 08:19:02 crc kubenswrapper[4820]: E0221 08:19:02.300943 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": container with ID starting with 00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b not found: ID does not exist" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b" Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300963 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"} err="failed to get container status \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": rpc error: code = NotFound desc = could not find container \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": container with ID starting with 00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b not found: ID does not exist" Feb 21 08:19:03 crc kubenswrapper[4820]: I0221 08:19:03.706500 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" path="/var/lib/kubelet/pods/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d/volumes" Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 
08:19:04.615887 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.615967 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.667407 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.671916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.276381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.276697 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.615479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.615527 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.655986 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.660368 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:06 crc kubenswrapper[4820]: I0221 08:19:06.288642 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:06 crc kubenswrapper[4820]: 
I0221 08:19:06.289840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.399958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.400054 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.400592 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.301414 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.301726 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.502503 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.642564 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 08:19:12 crc kubenswrapper[4820]: I0221 08:19:12.698511 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:12 crc kubenswrapper[4820]: E0221 08:19:12.699268 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:16 crc 
kubenswrapper[4820]: I0221 08:19:16.631418 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8fv99"] Feb 21 08:19:16 crc kubenswrapper[4820]: E0221 08:19:16.632524 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="init" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632544 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="init" Feb 21 08:19:16 crc kubenswrapper[4820]: E0221 08:19:16.632567 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632575 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632771 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.633548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.639097 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"] Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.640304 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.653992 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.654001 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8fv99"] Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.663282 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"] Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716810 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716845 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.817908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.817960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818043 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 
crc kubenswrapper[4820]: I0221 08:19:16.818826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.838386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.844994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.957002 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99" Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.964144 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:17 crc kubenswrapper[4820]: I0221 08:19:17.444684 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8fv99"] Feb 21 08:19:17 crc kubenswrapper[4820]: W0221 08:19:17.447567 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod549ebe18_2d08_41b5_ac23_2321a43dfe38.slice/crio-c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7 WatchSource:0}: Error finding container c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7: Status 404 returned error can't find the container with id c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7 Feb 21 08:19:17 crc kubenswrapper[4820]: I0221 08:19:17.518474 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"] Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.395691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerStarted","Data":"d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23"} Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.395950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerStarted","Data":"3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3"} Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397125 4820 generic.go:334] "Generic (PLEG): container finished" podID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerID="0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6" exitCode=0 Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerDied","Data":"0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6"} Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397201 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerStarted","Data":"c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7"} Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.419686 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9480-account-create-update-bpvlj" podStartSLOduration=2.4196327220000002 podStartE2EDuration="2.419632722s" podCreationTimestamp="2026-02-21 08:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:18.411524263 +0000 UTC m=+5533.444608461" watchObservedRunningTime="2026-02-21 08:19:18.419632722 +0000 UTC m=+5533.452716920" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.405852 4820 generic.go:334] "Generic (PLEG): container finished" podID="8f96e017-4a70-45ac-9d44-b57829510e53" containerID="d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23" exitCode=0 Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.405931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerDied","Data":"d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23"} Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.727295 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8fv99" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.868810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"549ebe18-2d08-41b5-ac23-2321a43dfe38\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.868889 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"549ebe18-2d08-41b5-ac23-2321a43dfe38\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.869408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "549ebe18-2d08-41b5-ac23-2321a43dfe38" (UID: "549ebe18-2d08-41b5-ac23-2321a43dfe38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.871168 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.880539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq" (OuterVolumeSpecName: "kube-api-access-g4fxq") pod "549ebe18-2d08-41b5-ac23-2321a43dfe38" (UID: "549ebe18-2d08-41b5-ac23-2321a43dfe38"). InnerVolumeSpecName "kube-api-access-g4fxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.973202 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415386 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerDied","Data":"c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7"} Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415734 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415406 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.759029 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.889548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"8f96e017-4a70-45ac-9d44-b57829510e53\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.889642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"8f96e017-4a70-45ac-9d44-b57829510e53\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.890542 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f96e017-4a70-45ac-9d44-b57829510e53" (UID: "8f96e017-4a70-45ac-9d44-b57829510e53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.895891 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4" (OuterVolumeSpecName: "kube-api-access-qb2f4") pod "8f96e017-4a70-45ac-9d44-b57829510e53" (UID: "8f96e017-4a70-45ac-9d44-b57829510e53"). InnerVolumeSpecName "kube-api-access-qb2f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.992555 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.992608 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerDied","Data":"3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3"} Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424416 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3" Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424452 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.005975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: E0221 08:19:22.006516 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: E0221 08:19:22.006587 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006594 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006835 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006858 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.007727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.010626 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.010891 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6lbmf" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.017661 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.026997 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.028717 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.040832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.061616 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " 
pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.111167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213224 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc 
kubenswrapper[4820]: I0221 08:19:22.213329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 
08:19:22.213837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.217801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.219120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.228252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.246307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316322 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " 
pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.327436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.338608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.363926 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.824330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.914819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.453162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerStarted","Data":"5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455759 4820 generic.go:334] "Generic (PLEG): container finished" podID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerID="700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390" exitCode=0 Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerStarted","Data":"e9e0ecab29aed0ecb81b655dc50c26ef2c09f8bf912783336d03514cdc73e15c"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.697298 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:23 crc kubenswrapper[4820]: E0221 08:19:23.697985 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.471890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerStarted","Data":"208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526"} Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.472283 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.501010 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" podStartSLOduration=3.500985904 podStartE2EDuration="3.500985904s" podCreationTimestamp="2026-02-21 08:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:24.497962182 +0000 UTC m=+5539.531046380" watchObservedRunningTime="2026-02-21 08:19:24.500985904 +0000 UTC m=+5539.534070102" Feb 21 08:19:27 crc kubenswrapper[4820]: I0221 08:19:27.501965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerStarted","Data":"5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077"} Feb 21 08:19:27 crc kubenswrapper[4820]: I0221 08:19:27.522049 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v696w" podStartSLOduration=2.9146010479999998 podStartE2EDuration="6.522030293s" podCreationTimestamp="2026-02-21 08:19:21 +0000 UTC" firstStartedPulling="2026-02-21 08:19:22.82805528 +0000 UTC 
m=+5537.861139478" lastFinishedPulling="2026-02-21 08:19:26.435484525 +0000 UTC m=+5541.468568723" observedRunningTime="2026-02-21 08:19:27.515809855 +0000 UTC m=+5542.548894053" watchObservedRunningTime="2026-02-21 08:19:27.522030293 +0000 UTC m=+5542.555114481" Feb 21 08:19:28 crc kubenswrapper[4820]: I0221 08:19:28.517581 4820 generic.go:334] "Generic (PLEG): container finished" podID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerID="5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077" exitCode=0 Feb 21 08:19:28 crc kubenswrapper[4820]: I0221 08:19:28.518010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerDied","Data":"5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077"} Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.841988 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980176 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980264 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: 
\"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.981096 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs" (OuterVolumeSpecName: "logs") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.986060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts" (OuterVolumeSpecName: "scripts") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.986084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr" (OuterVolumeSpecName: "kube-api-access-l8rlr") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). 
InnerVolumeSpecName "kube-api-access-l8rlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.003986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.005884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data" (OuterVolumeSpecName: "config-data") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083273 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083742 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083829 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083899 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083962 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerDied","Data":"5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024"} Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538301 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538310 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.607481 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:30 crc kubenswrapper[4820]: E0221 08:19:30.608338 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.608453 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.608737 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.610078 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612389 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612746 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612647 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.618212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6lbmf" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.624655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.695864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696331 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696521 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696612 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696748 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798439 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.801957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.803415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.804695 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.805851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: 
\"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.808941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.809176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.818006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.966181 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:31 crc kubenswrapper[4820]: I0221 08:19:31.407180 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:31 crc kubenswrapper[4820]: I0221 08:19:31.554739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"ed30b1cf65e3ff9a5ceb4764f72b7377ed1f77feba9f89be05c6adcc62d33326"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.365534 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.443716 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.444386 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-88785db75-n675s" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" containerID="cri-o://d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" gracePeriod=10 Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.566145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"7d43a2e3c545125738fb2eb30d178078f9354a6df411a70662cf7b7924b0c6e4"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.566196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"276ac3b7db1a4fa6785cd1e4803f1234f811d79abe2463e54c16d64c98d38470"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.567882 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.567914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.592595 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64bd48f99b-s6zl2" podStartSLOduration=2.592569116 podStartE2EDuration="2.592569116s" podCreationTimestamp="2026-02-21 08:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:32.584103057 +0000 UTC m=+5547.617187255" watchObservedRunningTime="2026-02-21 08:19:32.592569116 +0000 UTC m=+5547.625653324" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.300259 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346422 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: 
\"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.392456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5" (OuterVolumeSpecName: "kube-api-access-fhxr5") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "kube-api-access-fhxr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.452565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.467711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.526729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.536089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.537756 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config" (OuterVolumeSpecName: "config") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555104 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555150 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555168 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555184 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.575989 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" exitCode=0 Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576118 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"} Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576192 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"f827b80cd53d1809ff5c55e3c26ee1b57450c8044c04a10d2fb708ccf54ddf5e"} Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576228 4820 scope.go:117] "RemoveContainer" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.614637 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.618761 4820 scope.go:117] "RemoveContainer" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.625704 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.638060 4820 scope.go:117] "RemoveContainer" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: E0221 08:19:33.639729 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": container with ID starting with d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e not found: ID does not exist" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.639774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"} err="failed to get container status 
\"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": rpc error: code = NotFound desc = could not find container \"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": container with ID starting with d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e not found: ID does not exist" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.639802 4820 scope.go:117] "RemoveContainer" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: E0221 08:19:33.640258 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": container with ID starting with 8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7 not found: ID does not exist" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.640292 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7"} err="failed to get container status \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": rpc error: code = NotFound desc = could not find container \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": container with ID starting with 8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7 not found: ID does not exist" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.707094 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" path="/var/lib/kubelet/pods/6c743ad7-6ad8-4c83-b5fe-351c550e9495/volumes" Feb 21 08:19:35 crc kubenswrapper[4820]: I0221 08:19:35.703748 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 
08:19:35 crc kubenswrapper[4820]: E0221 08:19:35.704517 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:46 crc kubenswrapper[4820]: I0221 08:19:46.697154 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:46 crc kubenswrapper[4820]: E0221 08:19:46.698088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:58 crc kubenswrapper[4820]: I0221 08:19:58.696550 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:58 crc kubenswrapper[4820]: E0221 08:19:58.697341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:20:02 crc kubenswrapper[4820]: I0221 08:20:02.016785 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:20:02 crc kubenswrapper[4820]: I0221 08:20:02.018276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:20:12 crc kubenswrapper[4820]: I0221 08:20:12.697217 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:20:12 crc kubenswrapper[4820]: E0221 08:20:12.697925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:20:23 crc kubenswrapper[4820]: I0221 08:20:23.696790 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:20:23 crc kubenswrapper[4820]: I0221 08:20:23.982481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"} Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.565976 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:20:25 crc kubenswrapper[4820]: E0221 08:20:25.566889 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="init" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.566904 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="init" Feb 21 08:20:25 crc kubenswrapper[4820]: E0221 
08:20:25.566929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.566938 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.567136 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.567847 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.587981 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.661529 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.663774 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.674008 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.759818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.759897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.766702 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.767894 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.777788 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.779552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861724 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " 
pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861492 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rllks"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.862977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.863419 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.874092 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rllks"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.890129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963799 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963885 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963932 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.964752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.970667 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.972393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.975046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.984678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:25.997927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: 
\"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065994 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.066058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.066976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.067791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.083062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.086479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.087501 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.168211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.168379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.172159 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 
08:20:26.173741 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.175983 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.183629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.187406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.198406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.271249 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.271602 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.272950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod 
\"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.282275 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.295212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.377758 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.377868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.481881 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: 
\"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.482312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.483169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.509851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.556482 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:26 crc kubenswrapper[4820]: W0221 08:20:26.591887 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245926d7_e415_4af9_b793_9546bb73dc0c.slice/crio-319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3 WatchSource:0}: Error finding container 319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3: Status 404 returned error can't find the container with id 319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3 Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.594262 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.597783 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.760564 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.800741 4820 scope.go:117] "RemoveContainer" containerID="8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.849729 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rllks"] Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.854991 4820 scope.go:117] "RemoveContainer" containerID="bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472" Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.922373 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.018313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-cszw4" event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerStarted","Data":"b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131"} Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.020731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerStarted","Data":"e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492"} Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.023394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"] Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.024934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerStarted","Data":"596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e"} Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.024980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerStarted","Data":"319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3"} Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.031445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerStarted","Data":"1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba"} Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.048015 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" podStartSLOduration=2.047991102 podStartE2EDuration="2.047991102s" podCreationTimestamp="2026-02-21 08:20:25 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:20:27.043251663 +0000 UTC m=+5602.076335871" watchObservedRunningTime="2026-02-21 08:20:27.047991102 +0000 UTC m=+5602.081075300" Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.219160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"] Feb 21 08:20:27 crc kubenswrapper[4820]: W0221 08:20:27.262310 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10066581_0763_4940_bcba_cdd983819ef7.slice/crio-be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f WatchSource:0}: Error finding container be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f: Status 404 returned error can't find the container with id be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.042843 4820 generic.go:334] "Generic (PLEG): container finished" podID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerID="d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.043174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerDied","Data":"d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.048754 4820 generic.go:334] "Generic (PLEG): container finished" podID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerID="4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.048809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cszw4" 
event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerDied","Data":"4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.050727 4820 generic.go:334] "Generic (PLEG): container finished" podID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerID="112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.050816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerDied","Data":"112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.052806 4820 generic.go:334] "Generic (PLEG): container finished" podID="245926d7-e415-4af9-b793-9546bb73dc0c" containerID="596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.052898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerDied","Data":"596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054779 4820 generic.go:334] "Generic (PLEG): container finished" podID="10066581-0763-4940-bcba-cdd983819ef7" containerID="7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054849 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerDied","Data":"7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerStarted","Data":"be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056777 4820 generic.go:334] "Generic (PLEG): container finished" podID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerID="0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829" exitCode=0 Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerDied","Data":"0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829"} Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerStarted","Data":"0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7"} Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.455677 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.544847 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.545066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.546102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" (UID: "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.557517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn" (OuterVolumeSpecName: "kube-api-access-rq2xn") pod "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" (UID: "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51"). InnerVolumeSpecName "kube-api-access-rq2xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.647170 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.647203 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.653653 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.663555 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.674903 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.695428 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.718005 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:29 crc kubenswrapper[4820]: E0221 08:20:29.810601 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a418ce3_1a88_442d_9c0a_3aea9ad0cc51.slice\": RecentStats: unable to find data in memory cache]" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"245926d7-e415-4af9-b793-9546bb73dc0c\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"10066581-0763-4940-bcba-cdd983819ef7\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851453 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"245926d7-e415-4af9-b793-9546bb73dc0c\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"e47106ba-9033-418d-a248-6f7ee03d05e6\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"10066581-0763-4940-bcba-cdd983819ef7\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851579 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"e47106ba-9033-418d-a248-6f7ee03d05e6\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851690 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"96717fc4-053b-4426-ab50-dc0786c2eb7e\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851709 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"96717fc4-053b-4426-ab50-dc0786c2eb7e\" (UID: 
\"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851888 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10066581-0763-4940-bcba-cdd983819ef7" (UID: "10066581-0763-4940-bcba-cdd983819ef7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852283 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e47106ba-9033-418d-a248-6f7ee03d05e6" (UID: "e47106ba-9033-418d-a248-6f7ee03d05e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96717fc4-053b-4426-ab50-dc0786c2eb7e" (UID: "96717fc4-053b-4426-ab50-dc0786c2eb7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "245926d7-e415-4af9-b793-9546bb73dc0c" (UID: "245926d7-e415-4af9-b793-9546bb73dc0c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" (UID: "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852866 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852931 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852989 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.853043 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.853115 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf" (OuterVolumeSpecName: "kube-api-access-5npmf") pod "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" (UID: "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b"). InnerVolumeSpecName "kube-api-access-5npmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq" (OuterVolumeSpecName: "kube-api-access-6nksq") pod "96717fc4-053b-4426-ab50-dc0786c2eb7e" (UID: "96717fc4-053b-4426-ab50-dc0786c2eb7e"). InnerVolumeSpecName "kube-api-access-6nksq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz" (OuterVolumeSpecName: "kube-api-access-j6hlz") pod "e47106ba-9033-418d-a248-6f7ee03d05e6" (UID: "e47106ba-9033-418d-a248-6f7ee03d05e6"). InnerVolumeSpecName "kube-api-access-j6hlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4" (OuterVolumeSpecName: "kube-api-access-r2vf4") pod "10066581-0763-4940-bcba-cdd983819ef7" (UID: "10066581-0763-4940-bcba-cdd983819ef7"). InnerVolumeSpecName "kube-api-access-r2vf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.858722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn" (OuterVolumeSpecName: "kube-api-access-m8wtn") pod "245926d7-e415-4af9-b793-9546bb73dc0c" (UID: "245926d7-e415-4af9-b793-9546bb73dc0c"). InnerVolumeSpecName "kube-api-access-m8wtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956538 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956575 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956585 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956593 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083614 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerDied","Data":"1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083890 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083641 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087272 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cszw4" event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerDied","Data":"b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087371 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087460 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091626 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerDied","Data":"e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091704 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.100465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerDied","Data":"319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.100642 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102137 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102761 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerDied","Data":"be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102800 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.103485 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105574 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerDied","Data":"0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105610 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105689 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.884725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885525 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885540 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885559 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885567 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885582 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885589 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885619 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885638 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885645 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885658 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885665 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885880 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885900 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885912 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885924 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885939 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885950 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.886773 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.892812 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.893022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.893149 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jgh78" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.897056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.001179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.103742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.103895 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.104955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.105012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.110856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.111116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.111383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.128910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.217181 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.682131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:37 crc kubenswrapper[4820]: I0221 08:20:37.168955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerStarted","Data":"06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d"} Feb 21 08:20:46 crc kubenswrapper[4820]: I0221 08:20:46.257923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerStarted","Data":"cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737"} Feb 21 08:20:46 crc kubenswrapper[4820]: I0221 08:20:46.272308 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" podStartSLOduration=2.176982809 podStartE2EDuration="11.272229947s" podCreationTimestamp="2026-02-21 08:20:35 +0000 UTC" firstStartedPulling="2026-02-21 08:20:36.688750799 +0000 UTC m=+5611.721834987" lastFinishedPulling="2026-02-21 08:20:45.783997927 +0000 UTC m=+5620.817082125" observedRunningTime="2026-02-21 08:20:46.270554792 +0000 UTC m=+5621.303638990" watchObservedRunningTime="2026-02-21 08:20:46.272229947 +0000 UTC m=+5621.305314145" Feb 21 08:20:52 crc kubenswrapper[4820]: I0221 08:20:52.306904 4820 generic.go:334] "Generic (PLEG): container finished" podID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerID="cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737" exitCode=0 Feb 21 08:20:52 crc kubenswrapper[4820]: I0221 08:20:52.307000 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" 
event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerDied","Data":"cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737"} Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.593112 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.743535 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.743771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.744023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.744088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.750595 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts" (OuterVolumeSpecName: "scripts") pod 
"2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.752606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9" (OuterVolumeSpecName: "kube-api-access-kb2r9") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "kube-api-access-kb2r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.770045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.774351 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data" (OuterVolumeSpecName: "config-data") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846671 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846722 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846738 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846749 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerDied","Data":"06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d"} Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324757 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324809 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.415179 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:54 crc kubenswrapper[4820]: E0221 08:20:54.415891 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.415910 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.416107 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.416745 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.420214 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.420352 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jgh78" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.427292 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.559486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: 
I0221 08:20:54.559840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.559893 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.666902 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.676234 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.678698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.744941 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:55 crc kubenswrapper[4820]: I0221 08:20:55.201479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:55 crc kubenswrapper[4820]: I0221 08:20:55.333698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerStarted","Data":"e739d22e8a5fb67dd8a38933da1b7cdbf628d65d406c279afc479f8a5e13a79c"} Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.341704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerStarted","Data":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.343996 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.371689 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.371665635 podStartE2EDuration="2.371665635s" podCreationTimestamp="2026-02-21 08:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:20:56.36520565 +0000 UTC m=+5631.398289858" watchObservedRunningTime="2026-02-21 08:20:56.371665635 +0000 UTC m=+5631.404749833" Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.039255 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.050778 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.708397 4820 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" path="/var/lib/kubelet/pods/8d64f747-d529-4e8f-b2ea-11458f16f00c/volumes" Feb 21 08:21:02 crc kubenswrapper[4820]: I0221 08:21:02.030406 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:21:02 crc kubenswrapper[4820]: I0221 08:21:02.039826 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:21:03 crc kubenswrapper[4820]: I0221 08:21:03.706897 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" path="/var/lib/kubelet/pods/e41e7890-6ac4-4d64-aded-2e5934d7ceee/volumes" Feb 21 08:21:04 crc kubenswrapper[4820]: I0221 08:21:04.769286 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.367739 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.369806 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.371613 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.371958 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.379580 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.467977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468413 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468495 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468629 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: 
\"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.534586 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.536067 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.539483 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.561097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 
21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.576860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.577646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.584164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.616883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.640378 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.641647 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.648737 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.656866 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672709 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: 
\"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.717276 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.863969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864518 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.872836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.883261 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.899658 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.901512 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.901620 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.908382 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.910712 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.910982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.912128 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.919780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.931631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.938099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.940335 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.962739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc 
kubenswrapper[4820]: I0221 08:21:05.966734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967503 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc 
kubenswrapper[4820]: I0221 08:21:05.967778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 
08:21:05.968560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.977149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.977568 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.993557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:05.998695 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.003513 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.009333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070290 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4q45\" (UniqueName: 
\"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.073975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074054 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.077267 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.081130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.085717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.091168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.091619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.095851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2m6\" (UniqueName: 
\"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.115484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.118484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.147012 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.153317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.339777 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.351791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.370876 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.482016 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.645156 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.647161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.656604 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.657065 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.667840 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.737257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.803652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.804005 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: 
\"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.804064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.805574 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.807169 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907529 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.908421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.912652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.912663 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.927075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.927442 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.077313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.087198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.093447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.098698 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod543eb7a9_5b1a_407b_a035_86d3fb8bd55c.slice/crio-5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d WatchSource:0}: Error finding container 5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d: Status 404 returned error can't find the container with id 5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.099202 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991 WatchSource:0}: Error finding container 3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991: Status 404 returned error can't find the container with id 3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991 Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.160047 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.465920 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerStarted","Data":"99c4d061985b5004dc504e31adc8b10c206eef34bacb665b33bf678fab276fd0"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467558 4820 generic.go:334] "Generic (PLEG): container finished" podID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" exitCode=0 Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerStarted","Data":"5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.474431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerStarted","Data":"5caa9a6ee200ac7417238c6c1cc223745c163ea2c319bd460f9791be7f091ca4"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.476605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.489756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"c6cf6173ff4cbbbdf79ca4f92812a53845b01aed980187b59b47e17fad3eb8ae"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.502069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerStarted","Data":"401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.502396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerStarted","Data":"c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc"} Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.520494 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lwzsj" podStartSLOduration=2.520474156 podStartE2EDuration="2.520474156s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:07.519562411 +0000 UTC m=+5642.552646619" watchObservedRunningTime="2026-02-21 08:21:07.520474156 +0000 UTC m=+5642.553558344" Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.624472 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"] Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.686921 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ace6b1_75c4_451e_b167_1dbe9b2471ca.slice/crio-fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960 WatchSource:0}: Error finding container fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960: Status 404 returned error 
can't find the container with id fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960 Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.513771 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerStarted","Data":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"} Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.514074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.519133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerStarted","Data":"fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960"} Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.541044 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" podStartSLOduration=3.541024098 podStartE2EDuration="3.541024098s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:08.533692499 +0000 UTC m=+5643.566776707" watchObservedRunningTime="2026-02-21 08:21:08.541024098 +0000 UTC m=+5643.574108286" Feb 21 08:21:09 crc kubenswrapper[4820]: I0221 08:21:09.657971 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:09 crc kubenswrapper[4820]: I0221 08:21:09.674555 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.543269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.543558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.546074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerStarted","Data":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.546192 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" gracePeriod=30 Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.549053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerStarted","Data":"a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.550794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerStarted","Data":"bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561309 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log" containerID="cri-o://16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" gracePeriod=30 Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561348 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata" containerID="cri-o://17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" gracePeriod=30 Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.574892 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.73170202 podStartE2EDuration="5.574867227s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:06.755755514 +0000 UTC m=+5641.788839712" lastFinishedPulling="2026-02-21 08:21:09.598920721 +0000 UTC m=+5644.632004919" observedRunningTime="2026-02-21 08:21:10.568998169 +0000 UTC m=+5645.602082367" watchObservedRunningTime="2026-02-21 08:21:10.574867227 +0000 UTC m=+5645.607951425" Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.592900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wf76m" podStartSLOduration=4.592873334 
podStartE2EDuration="4.592873334s" podCreationTimestamp="2026-02-21 08:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:10.587442367 +0000 UTC m=+5645.620526585" watchObservedRunningTime="2026-02-21 08:21:10.592873334 +0000 UTC m=+5645.625957532" Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.623549 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.139332759 podStartE2EDuration="5.623528824s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:07.109934937 +0000 UTC m=+5642.143019135" lastFinishedPulling="2026-02-21 08:21:09.594131002 +0000 UTC m=+5644.627215200" observedRunningTime="2026-02-21 08:21:10.610378548 +0000 UTC m=+5645.643462746" watchObservedRunningTime="2026-02-21 08:21:10.623528824 +0000 UTC m=+5645.656613022" Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.642915 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.154031487 podStartE2EDuration="5.642889388s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:07.11005507 +0000 UTC m=+5642.143139268" lastFinishedPulling="2026-02-21 08:21:09.598912971 +0000 UTC m=+5644.631997169" observedRunningTime="2026-02-21 08:21:10.634950983 +0000 UTC m=+5645.668035181" watchObservedRunningTime="2026-02-21 08:21:10.642889388 +0000 UTC m=+5645.675973586" Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.685333 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.917787255 podStartE2EDuration="5.685305415s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:06.816472737 +0000 UTC m=+5641.849556935" lastFinishedPulling="2026-02-21 
08:21:09.583990897 +0000 UTC m=+5644.617075095" observedRunningTime="2026-02-21 08:21:10.653675249 +0000 UTC m=+5645.686759457" watchObservedRunningTime="2026-02-21 08:21:10.685305415 +0000 UTC m=+5645.718389623" Feb 21 08:21:10 crc kubenswrapper[4820]: E0221 08:21:10.860711 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-conmon-17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-conmon-16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.147758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.203707 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.319887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.319935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.320152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.320185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.321025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs" (OuterVolumeSpecName: "logs") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.325170 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6" (OuterVolumeSpecName: "kube-api-access-qt2m6") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "kube-api-access-qt2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.344600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.346873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data" (OuterVolumeSpecName: "config-data") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.353018 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422872 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422907 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422927 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422973 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.571122 4820 generic.go:334] "Generic (PLEG): container finished" podID="043ca807-c45c-45f9-b058-8979413aeac6" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" exitCode=0 Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.571162 4820 generic.go:334] "Generic (PLEG): container finished" podID="043ca807-c45c-45f9-b058-8979413aeac6" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" exitCode=143 Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.572380 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.574613 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575361 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991"} Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575384 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.597755 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.643557 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.655489 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.656016 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a 
not found: ID does not exist" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656067 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} err="failed to get container status \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a not found: ID does not exist" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656103 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.656421 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656477 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} err="failed to get container status \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 
08:21:11.656509 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664178 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} err="failed to get container status \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a not found: ID does not exist" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664248 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664715 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} err="failed to get container status \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.672076 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.688493 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.689074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043ca807-c45c-45f9-b058-8979413aeac6" 
containerName="nova-metadata-log" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log" Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.689101 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689107 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689352 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689372 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.691008 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.696209 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.696576 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.719229 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043ca807-c45c-45f9-b058-8979413aeac6" path="/var/lib/kubelet/pods/043ca807-c45c-45f9-b058-8979413aeac6/volumes" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.719837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738079 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 
crc kubenswrapper[4820]: I0221 08:21:11.738251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738270 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.840072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.840104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.841049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.858630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.860402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.861190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 
08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.878876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.018643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:12 crc kubenswrapper[4820]: W0221 08:21:12.509177 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab47881d_31b3_45fa_bc72_fce64a00567c.slice/crio-bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397 WatchSource:0}: Error finding container bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397: Status 404 returned error can't find the container with id bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397 Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.524382 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.591785 4820 generic.go:334] "Generic (PLEG): container finished" podID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerID="401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76" exitCode=0 Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.591882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerDied","Data":"401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76"} Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.601303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.612020 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.612068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.614500 4820 generic.go:334] "Generic (PLEG): container finished" podID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerID="a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60" exitCode=0 Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.614727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerDied","Data":"a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.651583 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.651560062 podStartE2EDuration="2.651560062s" podCreationTimestamp="2026-02-21 08:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:13.638888119 +0000 UTC m=+5648.671972317" watchObservedRunningTime="2026-02-21 08:21:13.651560062 +0000 UTC m=+5648.684644260" Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.987950 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.091585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts" (OuterVolumeSpecName: "scripts") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.093490 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj" (OuterVolumeSpecName: "kube-api-access-9mhzj") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "kube-api-access-9mhzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: E0221 08:21:14.110851 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data podName:52c86e8d-fde8-46e2-856f-10b3444f1ed7 nodeName:}" failed. No retries permitted until 2026-02-21 08:21:14.610827728 +0000 UTC m=+5649.643911926 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7") : error deleting /var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volume-subpaths: remove /var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volume-subpaths: no such file or directory Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.113409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187141 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187180 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187193 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.629751 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.630267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerDied","Data":"c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc"} Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.630302 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.697158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.700966 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data" (OuterVolumeSpecName: "config-data") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.812946 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.973776 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.974043 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" containerID="cri-o://17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" gracePeriod=30 Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.974482 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" containerID="cri-o://9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.002846 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.003069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler" containerID="cri-o://bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e" gracePeriod=30 Feb 21 08:21:15 crc 
kubenswrapper[4820]: I0221 08:21:15.017580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.047332 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.058255 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.098338 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.118981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.119345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.124043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4" (OuterVolumeSpecName: "kube-api-access-2rcb4") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "kube-api-access-2rcb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.150135 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.221343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.221386 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.222132 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.222177 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.224908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts" (OuterVolumeSpecName: "scripts") pod 
"36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.244176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data" (OuterVolumeSpecName: "config-data") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.323747 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.323785 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.639677 4820 generic.go:334] "Generic (PLEG): container finished" podID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerID="9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" exitCode=0 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.640047 4820 generic.go:334] "Generic (PLEG): container finished" podID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerID="17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" exitCode=143 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.639806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.640150 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerDied","Data":"fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641765 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641809 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" containerID="cri-o://6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641932 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.642044 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" containerID="cri-o://e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.715850 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" path="/var/lib/kubelet/pods/211ff6a9-0360-4606-92ca-cd4904494ff6/volumes" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.744422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.744897 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746261 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746283 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746289 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746304 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" 
containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746310 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746342 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746348 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746523 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746536 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746545 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746558 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.747356 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.749356 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.755835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs" (OuterVolumeSpecName: "logs") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954975 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.955633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.956017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.956250 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.960076 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb" (OuterVolumeSpecName: "kube-api-access-f68fb") pod 
"8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "kube-api-access-f68fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.981410 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.990569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data" (OuterVolumeSpecName: "config-data") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056892 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057038 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057050 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057058 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.066724 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.072930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.081858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.231665 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.259736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs" (OuterVolumeSpecName: "logs") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260657 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.264350 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch" (OuterVolumeSpecName: "kube-api-access-rwmch") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "kube-api-access-rwmch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.282878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.283980 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data" (OuterVolumeSpecName: "config-data") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.305535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361884 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361923 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361941 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.365184 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.374537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.445277 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.445587 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" containerID="cri-o://208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" gracePeriod=10 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.653926 4820 generic.go:334] "Generic (PLEG): container finished" podID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerID="208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" exitCode=0 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.654016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662407 4820 generic.go:334] "Generic (PLEG): container finished" podID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" exitCode=0 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662444 4820 generic.go:334] "Generic (PLEG): container finished" podID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" exitCode=143 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662587 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663797 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.667624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"c6cf6173ff4cbbbdf79ca4f92812a53845b01aed980187b59b47e17fad3eb8ae"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.667697 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.706563 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.722924 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.748226 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.769737 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.779011 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.779064 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} err="failed to get container status \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.779102 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 
08:21:16.779979 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780029 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} err="failed to get container status \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780056 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780597 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} err="failed to get container status \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780641 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc 
kubenswrapper[4820]: I0221 08:21:16.781028 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} err="failed to get container status \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.781053 4820 scope.go:117] "RemoveContainer" containerID="9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.787088 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.801909 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.809914 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.810460 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810481 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.810511 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" 
containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810702 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810718 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.811873 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.816595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.818343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.832347 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.833939 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.839568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.845000 4820 scope.go:117] "RemoveContainer" containerID="17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.845524 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.852622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.882890 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: W0221 08:21:16.883923 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef3408d_c90c_48d8_85fa_366e68d6e66d.slice/crio-2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f WatchSource:0}: Error finding container 2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f: Status 404 returned error can't find the container with id 2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972940 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") 
pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.973020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:16.983020 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074896 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.075031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.075272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.079465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.079969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.080072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: 
I0221 08:21:17.088705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.088875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.092962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.094146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.155101 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.164949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.175949 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.183255 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf" (OuterVolumeSpecName: "kube-api-access-c28bf") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "kube-api-access-c28bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.234885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.239731 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config" (OuterVolumeSpecName: "config") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.247524 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.247827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281615 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281654 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281667 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281680 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281691 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.668756 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.680810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerStarted","Data":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.681044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerStarted","Data":"2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.681446 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.682206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.691980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"0194164aec38f71af5721cfb64a867f87d1f6dc4cae02e011dfb17e92fdf75d8"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.697503 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.727058 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.7270344509999997 podStartE2EDuration="2.727034451s" podCreationTimestamp="2026-02-21 08:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:17.69592376 +0000 UTC m=+5652.729007958" watchObservedRunningTime="2026-02-21 08:21:17.727034451 +0000 UTC m=+5652.760118649" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.734359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" path="/var/lib/kubelet/pods/8123556f-a4ef-4790-ba20-d4b536407aa4/volumes" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.734983 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" 
path="/var/lib/kubelet/pods/ab47881d-31b3-45fa-bc72-fce64a00567c/volumes" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.736218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"e9e0ecab29aed0ecb81b655dc50c26ef2c09f8bf912783336d03514cdc73e15c"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.736282 4820 scope.go:117] "RemoveContainer" containerID="208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.738580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.745798 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.758711 4820 scope.go:117] "RemoveContainer" containerID="700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390" Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.707865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.709320 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.709435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495"} Feb 21 08:21:18 crc 
kubenswrapper[4820]: I0221 08:21:18.710268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.710298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.728232 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.72821092 podStartE2EDuration="2.72821092s" podCreationTimestamp="2026-02-21 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:18.725851707 +0000 UTC m=+5653.758935905" watchObservedRunningTime="2026-02-21 08:21:18.72821092 +0000 UTC m=+5653.761295118" Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.748767 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.748747186 podStartE2EDuration="2.748747186s" podCreationTimestamp="2026-02-21 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:18.746634038 +0000 UTC m=+5653.779718226" watchObservedRunningTime="2026-02-21 08:21:18.748747186 +0000 UTC m=+5653.781831384" Feb 21 08:21:19 crc kubenswrapper[4820]: I0221 08:21:19.707564 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" path="/var/lib/kubelet/pods/5b6b45ed-f167-4479-8f6c-f0e2aa72b046/volumes" Feb 21 08:21:22 crc kubenswrapper[4820]: I0221 08:21:22.165812 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:21:22 crc kubenswrapper[4820]: I0221 08:21:22.166195 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:21:26 crc kubenswrapper[4820]: I0221 08:21:26.393280 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.012631 4820 scope.go:117] "RemoveContainer" containerID="07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.053142 4820 scope.go:117] "RemoveContainer" containerID="ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.091633 4820 scope.go:117] "RemoveContainer" containerID="afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.156299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.156359 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.165199 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.165260 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.238915 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.261553 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.261875 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.262028 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:30 crc kubenswrapper[4820]: I0221 08:21:30.030649 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:21:30 crc kubenswrapper[4820]: I0221 08:21:30.040988 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:21:31 crc kubenswrapper[4820]: I0221 08:21:31.714307 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" path="/var/lib/kubelet/pods/85662cfe-6ca0-41d0-8858-4e63cd77f3c6/volumes" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.237427 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.247398 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.247730 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.248002 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:40 crc kubenswrapper[4820]: I0221 08:21:40.981610 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082267 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.088395 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg" (OuterVolumeSpecName: "kube-api-access-vwsmg") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "kube-api-access-vwsmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.109421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data" (OuterVolumeSpecName: "config-data") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.112937 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185536 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185573 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185588 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230345 4820 generic.go:334] "Generic (PLEG): container finished" podID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" exitCode=137 Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230391 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerDied","Data":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"} Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerDied","Data":"99c4d061985b5004dc504e31adc8b10c206eef34bacb665b33bf678fab276fd0"} Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230458 4820 scope.go:117] "RemoveContainer" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.251982 4820 scope.go:117] "RemoveContainer" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.252498 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": container with ID starting with fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755 not found: ID does not exist" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.252555 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"} err="failed to get container status \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": rpc error: code = NotFound desc = could not find container \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": container with ID starting with 
fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755 not found: ID does not exist" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.272120 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.285196 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.300700 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301151 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301196 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301203 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="init" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="init" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301415 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc 
kubenswrapper[4820]: I0221 08:21:41.301426 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.302064 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.305370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.306102 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.306537 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.313583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388678 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.389081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.389174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491485 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.502422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.503133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.504124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.504151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.512967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.624178 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.756383 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" path="/var/lib/kubelet/pods/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f/volumes" Feb 21 08:21:42 crc kubenswrapper[4820]: I0221 08:21:42.068972 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:42 crc kubenswrapper[4820]: W0221 08:21:42.082957 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1fd00e_e5fe_4977_91db_dc6b86e63e34.slice/crio-ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36 WatchSource:0}: Error finding container ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36: Status 404 returned error can't find the container with id ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36 Feb 21 08:21:42 crc kubenswrapper[4820]: I0221 08:21:42.242279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb1fd00e-e5fe-4977-91db-dc6b86e63e34","Type":"ContainerStarted","Data":"ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36"} Feb 21 08:21:43 crc kubenswrapper[4820]: I0221 08:21:43.251528 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb1fd00e-e5fe-4977-91db-dc6b86e63e34","Type":"ContainerStarted","Data":"b936b9bdaab59856d96caaba8479e7d2418e52a07686c7670c43622af2c41862"} Feb 21 08:21:43 crc kubenswrapper[4820]: I0221 08:21:43.266110 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.2660923840000002 podStartE2EDuration="2.266092384s" podCreationTimestamp="2026-02-21 08:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:43.264382818 +0000 UTC m=+5678.297467016" watchObservedRunningTime="2026-02-21 08:21:43.266092384 +0000 UTC m=+5678.299176582" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.269120 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerID="bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e" exitCode=137 Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.269199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerDied","Data":"bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e"} Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.345155 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470651 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470862 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: 
\"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.475619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs" (OuterVolumeSpecName: "kube-api-access-wh8qs") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "kube-api-access-wh8qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.495517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.496119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data" (OuterVolumeSpecName: "config-data") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573887 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573933 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573946 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerDied","Data":"5caa9a6ee200ac7417238c6c1cc223745c163ea2c319bd460f9791be7f091ca4"} Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283429 4820 scope.go:117] "RemoveContainer" containerID="bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283627 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.316624 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.333071 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.345503 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:46 crc kubenswrapper[4820]: E0221 08:21:46.346232 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.346315 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.346650 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.347919 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.352845 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.355656 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.489773 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.489930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.490035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.592029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.592416 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.593194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.605053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.605127 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.607708 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.624966 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.677490 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.101021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.155344 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.155385 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.294290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerStarted","Data":"11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb"} Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.295686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerStarted","Data":"c17979fce8baa6d02445439110a5c3b9be8bcec098230906e260f5f9059c0387"} Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.318914 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.3188957700000001 podStartE2EDuration="1.31889577s" podCreationTimestamp="2026-02-21 08:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:47.308012136 +0000 UTC m=+5682.341096334" watchObservedRunningTime="2026-02-21 08:21:47.31889577 +0000 UTC m=+5682.351979968" Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.710024 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" path="/var/lib/kubelet/pods/5a472f5c-b752-4dc8-84da-8a5801397ff8/volumes" Feb 21 08:21:48 crc 
kubenswrapper[4820]: I0221 08:21:48.247477 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.247795 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.248419 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.248547 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.624472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.646605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.678444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 08:21:52 crc 
kubenswrapper[4820]: I0221 08:21:52.365177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.552190 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"] Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.554738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.560730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.561671 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.563205 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"] Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.602983 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.711730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.715932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.717431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.722696 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.878442 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:21:53 crc kubenswrapper[4820]: I0221 08:21:53.381388 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"] Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.361182 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerStarted","Data":"34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849"} Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.361957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerStarted","Data":"9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81"} Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.377454 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5x9p7" podStartSLOduration=2.37740653 podStartE2EDuration="2.37740653s" podCreationTimestamp="2026-02-21 08:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:54.375822807 +0000 UTC m=+5689.408907025" watchObservedRunningTime="2026-02-21 08:21:54.37740653 +0000 UTC m=+5689.410490748" Feb 21 08:21:56 crc kubenswrapper[4820]: I0221 08:21:56.677985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 08:21:56 crc kubenswrapper[4820]: I0221 08:21:56.713445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 08:21:57 crc kubenswrapper[4820]: I0221 08:21:57.422877 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 
08:21:58.237514 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.245629 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.245934 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.246087 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:59 crc kubenswrapper[4820]: I0221 08:21:59.408137 4820 generic.go:334] "Generic (PLEG): container finished" podID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerID="34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849" exitCode=0 Feb 21 08:21:59 crc kubenswrapper[4820]: I0221 08:21:59.408225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerDied","Data":"34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849"} Feb 21 
08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.681832 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775896 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.780528 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts" (OuterVolumeSpecName: "scripts") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.781442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt" (OuterVolumeSpecName: "kube-api-access-7tdlt") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "kube-api-access-7tdlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.806359 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data" (OuterVolumeSpecName: "config-data") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.809144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878086 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878128 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878143 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878154 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerDied","Data":"9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81"} Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426066 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81" Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426096 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7" Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600450 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" containerID="cri-o://2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" gracePeriod=30 Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600495 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" containerID="cri-o://eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" gracePeriod=30 Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.610601 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.611022 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" containerID="cri-o://11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" gracePeriod=30 Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662621 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662857 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" containerID="cri-o://8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" gracePeriod=30 Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662902 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" containerID="cri-o://ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" gracePeriod=30 Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.681555 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.687800 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.690177 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.690265 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.435742 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerID="8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" exitCode=143 Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.435813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0"} Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.437969 4820 generic.go:334] "Generic (PLEG): container finished" podID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" exitCode=143 Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.437995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"} Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.680267 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.682773 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.684210 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.684266 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.679682 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.681263 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.682484 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.682525 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.449371 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs" (OuterVolumeSpecName: "logs") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: 
"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545957 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558532 4820 generic.go:334] "Generic (PLEG): container finished" podID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerID="ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" exitCode=0 Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558696 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.571451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2" (OuterVolumeSpecName: "kube-api-access-qz6c2") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "kube-api-access-qz6c2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.580396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data" (OuterVolumeSpecName: "config-data") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582447 4820 generic.go:334] "Generic (PLEG): container finished" podID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" exitCode=0 Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582491 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"0194164aec38f71af5721cfb64a867f87d1f6dc4cae02e011dfb17e92fdf75d8"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582544 4820 scope.go:117] "RemoveContainer" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582706 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.602006 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.621459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.625875 4820 scope.go:117] "RemoveContainer" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647336 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647956 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647977 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647989 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.656053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs" (OuterVolumeSpecName: "logs") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.661444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6" (OuterVolumeSpecName: "kube-api-access-w5mb6") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "kube-api-access-w5mb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.711443 4820 scope.go:117] "RemoveContainer" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.717364 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": container with ID starting with eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636 not found: ID does not exist" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.729435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data" (OuterVolumeSpecName: "config-data") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.722502 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} err="failed to get container status \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": rpc error: code = NotFound desc = could not find container \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": container with ID starting with eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636 not found: ID does not exist" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.729953 4820 scope.go:117] "RemoveContainer" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.731157 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": container with ID starting with 2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8 not found: ID does not exist" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.731194 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"} err="failed to get container status \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": rpc error: code = NotFound desc = could not find container \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": container with ID starting with 2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8 not found: ID does not exist" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.739641 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765757 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765788 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765830 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765841 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.807566 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.872763 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.908542 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.920509 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.936962 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.936990 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936997 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.937013 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937019 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc 
kubenswrapper[4820]: E0221 08:22:15.937043 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937050 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.937059 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937065 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937226 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937254 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937265 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937277 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937285 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.938283 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.943124 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.947973 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.075523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077156 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.180358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.183447 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.185527 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.195648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.255193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.591896 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.621039 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.629415 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.646533 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.649091 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.651832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.652565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.657770 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.679832 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.681226 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.683677 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.683779 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.707295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.892420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.892914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893040 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893500 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc 
kubenswrapper[4820]: I0221 08:22:16.893572 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.896688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.897120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.897660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.916800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.973051 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.424351 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:17 crc kubenswrapper[4820]: W0221 08:22:17.433568 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a5efcf2_dfdc_4c49_85f1_ccbd24edaebf.slice/crio-693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0 WatchSource:0}: Error finding container 693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0: Status 404 returned error can't find the container with id 693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0 Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.602717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0"} Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.603617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0"} Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.603717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"b77e775ea781ef6a5ae70de88c34a44655e28ba0dca43107d59484ae85245930"} Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.606285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0"} Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 
08:22:17.622498 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.622474816 podStartE2EDuration="2.622474816s" podCreationTimestamp="2026-02-21 08:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:17.617880872 +0000 UTC m=+5712.650965070" watchObservedRunningTime="2026-02-21 08:22:17.622474816 +0000 UTC m=+5712.655559014" Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.718083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" path="/var/lib/kubelet/pods/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9/volumes" Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.719364 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" path="/var/lib/kubelet/pods/96f965dc-1e6a-477d-84d7-1c6a0c66d940/volumes" Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.616489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd"} Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.616529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261"} Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.652573 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.652551007 podStartE2EDuration="2.652551007s" podCreationTimestamp="2026-02-21 08:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-21 08:22:18.646561925 +0000 UTC m=+5713.679646123" watchObservedRunningTime="2026-02-21 08:22:18.652551007 +0000 UTC m=+5713.685635205" Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.680365 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.682029 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.683353 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.683389 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:21 crc kubenswrapper[4820]: I0221 08:22:21.973438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:22:21 crc kubenswrapper[4820]: I0221 08:22:21.973513 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.255867 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.257375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.679957 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.681772 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.683037 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.683076 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 
08:22:26.973129 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.973486 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.230289 4820 scope.go:117] "RemoveContainer" containerID="08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d" Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.299630 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.340604 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.985407 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.985438 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 
08:22:31.678165 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679021 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679414 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679460 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:31 crc kubenswrapper[4820]: I0221 08:22:31.719838 4820 generic.go:334] "Generic (PLEG): container finished" podID="364f6af1-6c1b-4156-bc9b-de0229e0a315" 
containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" exitCode=137 Feb 21 08:22:31 crc kubenswrapper[4820]: I0221 08:22:31.719884 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerDied","Data":"11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb"} Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.697579 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732000 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerDied","Data":"c17979fce8baa6d02445439110a5c3b9be8bcec098230906e260f5f9059c0387"} Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732364 4820 scope.go:117] "RemoveContainer" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732536 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.795957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.796258 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.796286 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.819588 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq" (OuterVolumeSpecName: "kube-api-access-sc9qq") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "kube-api-access-sc9qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.823512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.835093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data" (OuterVolumeSpecName: "config-data") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898328 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898373 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898389 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.074404 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.087545 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.103913 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:33 crc kubenswrapper[4820]: E0221 08:22:33.104394 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:33 crc 
kubenswrapper[4820]: I0221 08:22:33.104416 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.104655 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.105528 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.108811 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.112023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: 
\"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.313431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.316996 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.328029 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.431050 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.708744 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" path="/var/lib/kubelet/pods/364f6af1-6c1b-4156-bc9b-de0229e0a315/volumes" Feb 21 08:22:33 crc kubenswrapper[4820]: W0221 08:22:33.887300 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475239fa_3785_4704_bef1_f554cf694456.slice/crio-b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62 WatchSource:0}: Error finding container b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62: Status 404 returned error can't find the container with id b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62 Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.889850 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.751062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerStarted","Data":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"} Feb 21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.751525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerStarted","Data":"b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62"} Feb 
21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.779610 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.779591522 podStartE2EDuration="1.779591522s" podCreationTimestamp="2026-02-21 08:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:34.777403302 +0000 UTC m=+5729.810487530" watchObservedRunningTime="2026-02-21 08:22:34.779591522 +0000 UTC m=+5729.812675720" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.260271 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.262097 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.263439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.264876 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.765544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.769262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.950665 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.952561 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979040 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979140 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979185 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" 
(UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.984444 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.034483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.036052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.054800 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085276 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.086373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.087580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.091939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.101661 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.122555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.279552 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.859737 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.952288 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.431291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785510 4820 generic.go:334] "Generic (PLEG): container finished" podID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerID="f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb" exitCode=0 Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb"} Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerStarted","Data":"d67f845d3717911b1815a01ec1fd7dc0df11dc2b02acfc8a168dc3d28d255825"} Feb 21 08:22:39 crc kubenswrapper[4820]: I0221 08:22:39.798717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerStarted","Data":"0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8"} Feb 21 08:22:39 crc kubenswrapper[4820]: I0221 08:22:39.799875 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.325306 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podStartSLOduration=4.325290191 podStartE2EDuration="4.325290191s" podCreationTimestamp="2026-02-21 08:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:39.818641033 +0000 UTC m=+5734.851725241" watchObservedRunningTime="2026-02-21 08:22:40.325290191 +0000 UTC m=+5735.358374389" Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.334911 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.339342 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" containerID="cri-o://8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" gracePeriod=30 Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.339380 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" 
containerID="cri-o://07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" gracePeriod=30 Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.812960 4820 generic.go:334] "Generic (PLEG): container finished" podID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerID="8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" exitCode=143 Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.813024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0"} Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.432065 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.459028 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.816488 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.816543 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.863352 4820 generic.go:334] "Generic (PLEG): container finished" podID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerID="07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" exitCode=0 
Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.868307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0"} Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.906058 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.156728 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324307 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: 
\"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.332586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs" (OuterVolumeSpecName: "logs") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.333995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj" (OuterVolumeSpecName: "kube-api-access-tglwj") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "kube-api-access-tglwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.356829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data" (OuterVolumeSpecName: "config-data") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.396363 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426764 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426796 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426806 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426815 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875015 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"b77e775ea781ef6a5ae70de88c34a44655e28ba0dca43107d59484ae85245930"} Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875431 4820 scope.go:117] "RemoveContainer" containerID="07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.901484 4820 scope.go:117] "RemoveContainer" containerID="8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.906086 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.915971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.930634 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: E0221 08:22:44.931383 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931505 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: E0221 08:22:44.931612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931980 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.932086 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.933939 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939326 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939660 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.952084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.036920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.036991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037113 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037176 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139151 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: 
\"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.146907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.147002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.150102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.150823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.171269 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.256676 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.707077 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" path="/var/lib/kubelet/pods/a64e0ba1-9522-4546-b79e-1ac9cb43f135/volumes" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.723332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.882563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"f84f25836fa8a5c0573e20405d3a79bd27bbd629ad136467d54a559c6258e788"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.894886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.895230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.920636 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.920616719 podStartE2EDuration="2.920616719s" podCreationTimestamp="2026-02-21 08:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:46.912817468 +0000 UTC m=+5741.945901666" watchObservedRunningTime="2026-02-21 08:22:46.920616719 +0000 UTC m=+5741.953700917" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.281412 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.332275 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.332478 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" containerID="cri-o://51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" gracePeriod=10 Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.874056 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904435 4820 generic.go:334] "Generic (PLEG): container finished" podID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" exitCode=0 Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"} Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904539 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d"} Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904579 4820 scope.go:117] "RemoveContainer" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.933478 4820 scope.go:117] "RemoveContainer" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.955674 4820 scope.go:117] "RemoveContainer" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: E0221 08:22:47.961526 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": container with ID starting with 51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505 not found: ID does not exist" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.961785 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"} err="failed to get container status \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": rpc error: code = NotFound desc = could not find container \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": container with ID starting with 51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505 not found: ID does not exist" Feb 21 08:22:47 crc 
kubenswrapper[4820]: I0221 08:22:47.961815 4820 scope.go:117] "RemoveContainer" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: E0221 08:22:47.962425 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": container with ID starting with 48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e not found: ID does not exist" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.962483 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e"} err="failed to get container status \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": rpc error: code = NotFound desc = could not find container \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": container with ID starting with 48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e not found: ID does not exist" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996721 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996796 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996853 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.003508 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45" (OuterVolumeSpecName: "kube-api-access-s4q45") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "kube-api-access-s4q45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.041646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.052798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.066075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config" (OuterVolumeSpecName: "config") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.067854 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098642 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098672 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098685 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098699 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098710 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.237281 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.245218 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:49 crc kubenswrapper[4820]: I0221 08:22:49.707338 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" path="/var/lib/kubelet/pods/543eb7a9-5b1a-407b-a035-86d3fb8bd55c/volumes" Feb 21 08:22:55 crc kubenswrapper[4820]: 
I0221 08:22:55.257335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:55 crc kubenswrapper[4820]: I0221 08:22:55.257630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:56 crc kubenswrapper[4820]: I0221 08:22:56.271443 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:56 crc kubenswrapper[4820]: I0221 08:22:56.271460 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.266317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.267213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.268024 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.274508 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 08:23:06 crc kubenswrapper[4820]: I0221 08:23:06.054159 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:23:06 crc kubenswrapper[4820]: I0221 08:23:06.064248 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 
08:23:13 crc kubenswrapper[4820]: I0221 08:23:13.816206 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:23:13 crc kubenswrapper[4820]: I0221 08:23:13.816970 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.245311 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:19 crc kubenswrapper[4820]: E0221 08:23:19.246767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="init" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.246790 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="init" Feb 21 08:23:19 crc kubenswrapper[4820]: E0221 08:23:19.246815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.246822 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.247157 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.248808 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.262821 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263081 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n22hz" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.324816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.345435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.345832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.346014 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.346206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.359623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.420101 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.421754 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" containerID="cri-o://3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.422234 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" containerID="cri-o://9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.432137 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.434638 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444352 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444629 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" containerID="cri-o://fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444804 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" containerID="cri-o://8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod 
\"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.449419 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.449806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc 
kubenswrapper[4820]: I0221 08:23:19.482648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.485096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.493602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552539 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: 
\"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552580 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.597191 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.654062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.654099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.656657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.657095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.657683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: 
\"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.658157 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.674894 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.770108 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.079066 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.079581 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.185114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"4eb3448883c497758beeea4960713d6d2bc637bf465fa9c8ccfeb69d503fe899"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.187581 4820 generic.go:334] "Generic (PLEG): container finished" podID="57f780e9-b685-4b5b-bab3-63b31b794393" containerID="3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9" exitCode=143 Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.187646 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.190089 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerID="fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f" exitCode=143 Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.190179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.262326 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.058707 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.092229 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.093745 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.101064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.111860 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.171933 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.202229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"ca15dd4d00424e10b16c9315ace884ed75b2fbad9a42602661e866daf6703ced"} Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.224301 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.226293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.233165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.291459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.291557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292520 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292993 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293143 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: 
\"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.294541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.299169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.299170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.313691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 
08:23:21.318255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395030 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qhz\" 
(UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395122 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395763 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.396483 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: 
\"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.399830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.400494 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.401547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.412319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.436746 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.555760 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.912836 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.055340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.212559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"b8979ed7b663edbb899b5b453daac362045e3fab6583881f796d8f5da1b726a5"} Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.213975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"eff2d04aa677852d296ff8fc2a98555932014b77b70e9d62fecd2afd6b553dbd"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228348 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerID="8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64" exitCode=0 Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228969 4820 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232669 4820 generic.go:334] "Generic (PLEG): container finished" podID="57f780e9-b685-4b5b-bab3-63b31b794393" containerID="9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4" exitCode=0 Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232759 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.233805 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.241819 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.335716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336180 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336384 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336430 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336481 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336551 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs" (OuterVolumeSpecName: "logs") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.337278 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.337301 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.339384 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.339453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs" (OuterVolumeSpecName: "logs") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.342450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z" (OuterVolumeSpecName: "kube-api-access-4tw8z") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "kube-api-access-4tw8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.342456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts" (OuterVolumeSpecName: "scripts") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.343397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts" (OuterVolumeSpecName: "scripts") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.348673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc" (OuterVolumeSpecName: "kube-api-access-dnczc") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "kube-api-access-dnczc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.371938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.389009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.417470 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.424703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data" (OuterVolumeSpecName: "config-data") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.425511 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data" (OuterVolumeSpecName: "config-data") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.431411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438638 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438673 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438685 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438694 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438703 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438711 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438720 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438729 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438737 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438745 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438753 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438761 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: E0221 08:23:23.863701 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f780e9_b685_4b5b_bab3_63b31b794393.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f780e9_b685_4b5b_bab3_63b31b794393.slice/crio-6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1\": RecentStats: unable to find data in memory cache]" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.242511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.242511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.281146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.310612 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.323832 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.339522 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.351551 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352068 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352081 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352103 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352109 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352132 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352146 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352324 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352364 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352376 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352397 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.353484 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.357130 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.361095 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.361973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.362212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.365050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.366769 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.370048 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.370412 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.378394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.387536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 
21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " 
pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: 
\"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " 
pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 
08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574734 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574837 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.576626 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.577356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.584036 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.584357 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.587583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.591581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.591738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.596838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.601073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.601568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.605220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.607107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.608791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.615824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.673116 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.697895 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:25 crc kubenswrapper[4820]: I0221 08:23:25.721368 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" path="/var/lib/kubelet/pods/4b012ae7-d786-413d-82ca-88448b64b4cd/volumes" Feb 21 08:23:25 crc kubenswrapper[4820]: I0221 08:23:25.728380 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" path="/var/lib/kubelet/pods/57f780e9-b685-4b5b-bab3-63b31b794393/volumes" Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.195141 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.285864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"00ddbb706a21079bab997f8ef05130c3237c497c557a3f1f02ecb26f05fadb8f"} Feb 21 08:23:29 crc kubenswrapper[4820]: W0221 08:23:29.289886 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b461284_e512_4b62_95ae_fc82b119c340.slice/crio-2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8 WatchSource:0}: Error finding container 2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8: Status 404 returned error can't find the container with id 2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8 Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.290883 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.299269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" 
event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b"} Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.305551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8"} Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.318678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4"} Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.321614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"50f4060c16dc9082b56c390f4dcec7673f57073afe563f4603d30d3bf17025e3"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.334843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.336442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.336606 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66bd57fd8f-854qq" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" 
containerID="cri-o://51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" gracePeriod=30 Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.337214 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66bd57fd8f-854qq" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" containerID="cri-o://7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" gracePeriod=30 Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.340939 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"f2e4f3d7795d49d65b123edda21972d6494394de2f38ab6da002d16ef4ad13ff"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.344052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.346101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"be887c463e233940a75b1c3b78d93de83c1fc45b946d6370c0a35684bae704d0"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.347751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.347801 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" 
event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a"} Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.369797 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66bd57fd8f-854qq" podStartSLOduration=3.499159319 podStartE2EDuration="12.369778138s" podCreationTimestamp="2026-02-21 08:23:19 +0000 UTC" firstStartedPulling="2026-02-21 08:23:20.27128258 +0000 UTC m=+5775.304366778" lastFinishedPulling="2026-02-21 08:23:29.141901399 +0000 UTC m=+5784.174985597" observedRunningTime="2026-02-21 08:23:31.360771695 +0000 UTC m=+5786.393855893" watchObservedRunningTime="2026-02-21 08:23:31.369778138 +0000 UTC m=+5786.402862336" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.396946 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-547899c658-2788v" podStartSLOduration=2.563259429 podStartE2EDuration="10.396921583s" podCreationTimestamp="2026-02-21 08:23:21 +0000 UTC" firstStartedPulling="2026-02-21 08:23:22.058832434 +0000 UTC m=+5777.091916632" lastFinishedPulling="2026-02-21 08:23:29.892494588 +0000 UTC m=+5784.925578786" observedRunningTime="2026-02-21 08:23:31.383933942 +0000 UTC m=+5786.417018170" watchObservedRunningTime="2026-02-21 08:23:31.396921583 +0000 UTC m=+5786.430005781" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.411707 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.411681162 podStartE2EDuration="7.411681162s" podCreationTimestamp="2026-02-21 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:23:31.400322865 +0000 UTC m=+5786.433407063" watchObservedRunningTime="2026-02-21 08:23:31.411681162 +0000 UTC m=+5786.444765350" Feb 21 08:23:31 crc 
kubenswrapper[4820]: I0221 08:23:31.438127 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d844c64f6-dltxp" podStartSLOduration=3.210151731 podStartE2EDuration="10.438105587s" podCreationTimestamp="2026-02-21 08:23:21 +0000 UTC" firstStartedPulling="2026-02-21 08:23:21.92562327 +0000 UTC m=+5776.958707458" lastFinishedPulling="2026-02-21 08:23:29.153577116 +0000 UTC m=+5784.186661314" observedRunningTime="2026-02-21 08:23:31.428992871 +0000 UTC m=+5786.462077079" watchObservedRunningTime="2026-02-21 08:23:31.438105587 +0000 UTC m=+5786.471189785" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.438446 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.438496 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.556909 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.556958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.365354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585"} Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.365362 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f9bd7b79c-txbkn" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" containerID="cri-o://02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" gracePeriod=30 Feb 21 08:23:32 crc 
kubenswrapper[4820]: I0221 08:23:32.365615 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f9bd7b79c-txbkn" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" containerID="cri-o://6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" gracePeriod=30 Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.372917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"52fff291f2410225f8ebc1a06a3015dd791eec0f4b8ef8a5cf64bdab4897b97c"} Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.447781 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f9bd7b79c-txbkn" podStartSLOduration=2.671937066 podStartE2EDuration="13.447762975s" podCreationTimestamp="2026-02-21 08:23:19 +0000 UTC" firstStartedPulling="2026-02-21 08:23:20.078858593 +0000 UTC m=+5775.111942791" lastFinishedPulling="2026-02-21 08:23:30.854684502 +0000 UTC m=+5785.887768700" observedRunningTime="2026-02-21 08:23:32.439315937 +0000 UTC m=+5787.472400145" watchObservedRunningTime="2026-02-21 08:23:32.447762975 +0000 UTC m=+5787.480847173" Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.486273 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.486254207 podStartE2EDuration="8.486254207s" podCreationTimestamp="2026-02-21 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:23:32.481529259 +0000 UTC m=+5787.514613457" watchObservedRunningTime="2026-02-21 08:23:32.486254207 +0000 UTC m=+5787.519338405" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.673883 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.673935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.698758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.699120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.709809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.737172 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.738950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.747177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399825 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399839 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.508313 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.589835 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.590117 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.591181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 08:23:38 crc kubenswrapper[4820]: I0221 08:23:38.352689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:39 crc kubenswrapper[4820]: I0221 08:23:39.598738 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:39 crc kubenswrapper[4820]: I0221 08:23:39.770901 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:41 crc kubenswrapper[4820]: I0221 08:23:41.438596 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:23:41 crc kubenswrapper[4820]: I0221 08:23:41.558834 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.1.103:8443: connect: connection refused" Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816538 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816831 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816884 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.817683 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.817748 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" gracePeriod=600 Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.501753 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" 
containerID="71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" exitCode=0 Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.501835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"} Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.502148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"} Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.502172 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.059875 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.071531 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.707716 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" path="/var/lib/kubelet/pods/0fea2a27-a57a-4827-8e17-5d19ef7bba28/volumes" Feb 21 08:23:46 crc kubenswrapper[4820]: I0221 08:23:46.030755 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:23:46 crc kubenswrapper[4820]: I0221 08:23:46.040133 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:23:47 crc kubenswrapper[4820]: I0221 08:23:47.709959 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" path="/var/lib/kubelet/pods/d4a61ba7-b697-4b33-8ed3-9dda50a2c415/volumes" Feb 21 08:23:53 crc kubenswrapper[4820]: I0221 08:23:53.386621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:53 crc kubenswrapper[4820]: I0221 08:23:53.547612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.168043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.298855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.380923 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.595056 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" containerID="cri-o://6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" gracePeriod=30 Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.595096 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" containerID="cri-o://b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" gracePeriod=30 Feb 21 08:23:56 crc kubenswrapper[4820]: I0221 08:23:56.039760 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kncz4"] Feb 21 08:23:56 crc kubenswrapper[4820]: I0221 08:23:56.051146 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-kncz4"] Feb 21 08:23:57 crc kubenswrapper[4820]: I0221 08:23:57.707595 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" path="/var/lib/kubelet/pods/2285cbc5-545d-463d-ae4a-350c3fd26323/volumes" Feb 21 08:23:59 crc kubenswrapper[4820]: I0221 08:23:59.628901 4820 generic.go:334] "Generic (PLEG): container finished" podID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerID="b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" exitCode=0 Feb 21 08:23:59 crc kubenswrapper[4820]: I0221 08:23:59.629002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c"} Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.438138 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.649955 4820 generic.go:334] "Generic (PLEG): container finished" podID="9df49f4c-07ec-4360-88da-765b936357ad" containerID="7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" exitCode=137 Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650279 4820 generic.go:334] "Generic (PLEG): container finished" podID="9df49f4c-07ec-4360-88da-765b936357ad" containerID="51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" exitCode=137 Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650043 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" 
event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d"} Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4"} Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.745811 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856709 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod 
\"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.860568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs" (OuterVolumeSpecName: "logs") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.863936 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.864138 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k" (OuterVolumeSpecName: "kube-api-access-r484k") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "kube-api-access-r484k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.883763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts" (OuterVolumeSpecName: "scripts") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.912638 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data" (OuterVolumeSpecName: "config-data") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960080 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960125 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960134 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960145 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc 
kubenswrapper[4820]: I0221 08:24:01.960155 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663001 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2164709-4568-4dea-8421-e4d863e18ac3" containerID="6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" exitCode=137 Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663551 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2164709-4568-4dea-8421-e4d863e18ac3" containerID="02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" exitCode=137 Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.665867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"ca15dd4d00424e10b16c9315ace884ed75b2fbad9a42602661e866daf6703ced"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.665908 4820 scope.go:117] "RemoveContainer" containerID="7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.666092 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.718822 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.729010 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.862418 4820 scope.go:117] "RemoveContainer" containerID="51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.954324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.089859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090654 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs" (OuterVolumeSpecName: "logs") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090890 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.092036 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.095861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7" (OuterVolumeSpecName: "kube-api-access-dx6n7") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "kube-api-access-dx6n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.114768 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.115671 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data" (OuterVolumeSpecName: "config-data") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.118477 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts" (OuterVolumeSpecName: "scripts") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194392 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194430 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194441 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194451 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.676707 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.676708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"4eb3448883c497758beeea4960713d6d2bc637bf465fa9c8ccfeb69d503fe899"} Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.677856 4820 scope.go:117] "RemoveContainer" containerID="6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.715886 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df49f4c-07ec-4360-88da-765b936357ad" path="/var/lib/kubelet/pods/9df49f4c-07ec-4360-88da-765b936357ad/volumes" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.716492 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.722639 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.824460 4820 scope.go:117] "RemoveContainer" containerID="02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" Feb 21 08:24:05 crc kubenswrapper[4820]: I0221 08:24:05.714312 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" path="/var/lib/kubelet/pods/a2164709-4568-4dea-8421-e4d863e18ac3/volumes" Feb 21 08:24:11 crc kubenswrapper[4820]: I0221 08:24:11.437690 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:24:21 crc kubenswrapper[4820]: I0221 
08:24:21.437547 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:24:21 crc kubenswrapper[4820]: I0221 08:24:21.438198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:25 crc kubenswrapper[4820]: I0221 08:24:25.882073 4820 generic.go:334] "Generic (PLEG): container finished" podID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerID="6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" exitCode=137 Feb 21 08:24:25 crc kubenswrapper[4820]: I0221 08:24:25.882163 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b"} Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.025877 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135055 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135120 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.136119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs" (OuterVolumeSpecName: "logs") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.140413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.147996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j" (OuterVolumeSpecName: "kube-api-access-8fs8j") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "kube-api-access-8fs8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.158321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data" (OuterVolumeSpecName: "config-data") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.158576 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts" (OuterVolumeSpecName: "scripts") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.162727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.180825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238117 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238840 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238908 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238963 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239135 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239203 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239305 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892037 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"b8979ed7b663edbb899b5b453daac362045e3fab6583881f796d8f5da1b726a5"} Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892100 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892103 4820 scope.go:117] "RemoveContainer" containerID="b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.930982 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.939448 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:24:27 crc kubenswrapper[4820]: I0221 08:24:27.049028 4820 scope.go:117] "RemoveContainer" containerID="6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" Feb 21 08:24:27 crc kubenswrapper[4820]: I0221 08:24:27.707477 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" path="/var/lib/kubelet/pods/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9/volumes" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.746899 4820 scope.go:117] "RemoveContainer" containerID="135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.804338 4820 scope.go:117] "RemoveContainer" containerID="501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.822161 4820 scope.go:117] "RemoveContainer" containerID="2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.290993 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292017 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292036 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292061 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292096 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292105 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292126 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292147 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc 
kubenswrapper[4820]: E0221 08:24:36.292166 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292174 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292418 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292436 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292461 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292472 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292482 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292502 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.293725 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.319021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.426970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427595 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529773 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.530349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: 
\"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.530799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.531071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.536646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.536962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.539955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 
08:24:36.549409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.624363 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.151978 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.824051 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.825471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.856983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.969386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.969447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc 
kubenswrapper[4820]: I0221 08:24:37.972324 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.973664 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.978495 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.988650 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"81f3f666ea55d7ac8ed97b5b8c94c5056cd164cac1b5985c3ae74c4ae1db9fb3"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"cb0ce49677d9242b28528d708c0088fbec55e49300542c5fc31e60fdd3adf149"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"95dbf117f220fe71931283c3aedab5204239edd50b563006819dc9a1c8df5cac"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.050996 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5879b888bd-q5njq" podStartSLOduration=2.050972845 podStartE2EDuration="2.050972845s" podCreationTimestamp="2026-02-21 08:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-21 08:24:38.045452526 +0000 UTC m=+5853.078536724" watchObservedRunningTime="2026-02-21 08:24:38.050972845 +0000 UTC m=+5853.084057043" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.073123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.091546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.144293 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.175285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.175425 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.176460 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 
08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.200719 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.301651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.642586 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.797691 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:39 crc kubenswrapper[4820]: I0221 08:24:39.172921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerStarted","Data":"165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341"} Feb 21 08:24:39 crc kubenswrapper[4820]: I0221 08:24:39.174772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerStarted","Data":"09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396"} Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.184606 4820 generic.go:334] "Generic (PLEG): container finished" podID="84358593-717e-4372-b9bb-28a34fb65b6e" containerID="68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7" exitCode=0 Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.184780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" 
event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerDied","Data":"68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7"} Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.187047 4820 generic.go:334] "Generic (PLEG): container finished" podID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerID="a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560" exitCode=0 Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.187082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerDied","Data":"a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560"} Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.579090 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.584724 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"84358593-717e-4372-b9bb-28a34fb65b6e\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"84358593-717e-4372-b9bb-28a34fb65b6e\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84358593-717e-4372-b9bb-28a34fb65b6e" (UID: "84358593-717e-4372-b9bb-28a34fb65b6e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d69513ef-06f3-4770-9e89-5b7b7fe873b2" (UID: "d69513ef-06f3-4770-9e89-5b7b7fe873b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.765719 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.765837 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.770856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm" (OuterVolumeSpecName: "kube-api-access-r89jm") pod "84358593-717e-4372-b9bb-28a34fb65b6e" (UID: "84358593-717e-4372-b9bb-28a34fb65b6e"). InnerVolumeSpecName "kube-api-access-r89jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.775016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw" (OuterVolumeSpecName: "kube-api-access-5v4kw") pod "d69513ef-06f3-4770-9e89-5b7b7fe873b2" (UID: "d69513ef-06f3-4770-9e89-5b7b7fe873b2"). InnerVolumeSpecName "kube-api-access-5v4kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.867332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.867371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerDied","Data":"165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341"} Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226465 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226515 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerDied","Data":"09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396"} Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232662 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232710 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.004803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:43 crc kubenswrapper[4820]: E0221 08:24:43.005295 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005316 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: E0221 08:24:43.005647 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005656 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005865 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005883 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.006666 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.008608 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.009814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-b8zxk" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.015992 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.198787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.198887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.199006 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: 
\"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.308164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.308309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.320722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.331854 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.855602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:44 crc kubenswrapper[4820]: I0221 08:24:44.252299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerStarted","Data":"ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a"} Feb 21 08:24:46 crc kubenswrapper[4820]: I0221 08:24:46.625336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:46 crc kubenswrapper[4820]: I0221 08:24:46.625869 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.050854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.060707 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.069299 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.078382 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.710647 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" path="/var/lib/kubelet/pods/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22/volumes" Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.711372 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="80901dca-016d-4c52-b87d-f953b0689f1a" path="/var/lib/kubelet/pods/80901dca-016d-4c52-b87d-f953b0689f1a/volumes" Feb 21 08:24:54 crc kubenswrapper[4820]: I0221 08:24:54.457548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerStarted","Data":"14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245"} Feb 21 08:24:54 crc kubenswrapper[4820]: I0221 08:24:54.483890 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-27sgb" podStartSLOduration=2.306692934 podStartE2EDuration="12.483865766s" podCreationTimestamp="2026-02-21 08:24:42 +0000 UTC" firstStartedPulling="2026-02-21 08:24:43.861838868 +0000 UTC m=+5858.894923066" lastFinishedPulling="2026-02-21 08:24:54.0390117 +0000 UTC m=+5869.072095898" observedRunningTime="2026-02-21 08:24:54.478905922 +0000 UTC m=+5869.511990130" watchObservedRunningTime="2026-02-21 08:24:54.483865766 +0000 UTC m=+5869.516949964" Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.477750 4820 generic.go:334] "Generic (PLEG): container finished" podID="898015a2-3ff9-4c61-b164-4a6961c44884" containerID="14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245" exitCode=0 Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.478394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerDied","Data":"14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245"} Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.629882 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5879b888bd-q5njq" podUID="d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Feb 21 
08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.031560 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6768b"] Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.041438 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6768b"] Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.710622 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c29c61-83db-423e-8e56-52c1637985e2" path="/var/lib/kubelet/pods/46c29c61-83db-423e-8e56-52c1637985e2/volumes" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.812610 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903987 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.909599 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2" (OuterVolumeSpecName: "kube-api-access-bp5m2") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "kube-api-access-bp5m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.933150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.974610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data" (OuterVolumeSpecName: "config-data") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006773 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006814 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006825 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.494953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerDied","Data":"ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a"} Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.494991 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.495101 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.460790 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:24:59 crc kubenswrapper[4820]: E0221 08:24:59.461317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.461334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.461539 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.462343 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.465439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.465749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.466041 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-b8zxk" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.490899 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: 
\"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535453 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.536254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.562979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.564613 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.569806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.576795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.639855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.639954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 
08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641190 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.647204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " 
pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.648383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.649521 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.651862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.660932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.662939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.674767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.715638 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.743589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.744158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.744650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745628 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745767 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.746384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.751513 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.753211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.757530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.760925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.781690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848138 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848289 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 
08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.856388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.860030 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.860815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.868610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.887893 4820 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.925794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.319738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.323191 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d104918_3b6f_4543_9ca3_0ae595be78a2.slice/crio-dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e WatchSource:0}: Error finding container dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e: Status 404 returned error can't find the container with id dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.442005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.452410 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod626f6f5d_6222_406e_a687_92b74b1c9def.slice/crio-580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c WatchSource:0}: Error finding container 580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c: Status 404 returned error can't find the container with id 580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.556547 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerStarted","Data":"dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e"} Feb 21 
08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.558286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerStarted","Data":"580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c"} Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.592730 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.593783 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eba12fb_3cb7_4830_9722_9c9e6ab46002.slice/crio-3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab WatchSource:0}: Error finding container 3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab: Status 404 returned error can't find the container with id 3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.573729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerStarted","Data":"3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab"} Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.575692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerStarted","Data":"44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5"} Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.575791 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.596924 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-67969b55f7-j9b9h" 
podStartSLOduration=2.596904532 podStartE2EDuration="2.596904532s" podCreationTimestamp="2026-02-21 08:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:01.594917928 +0000 UTC m=+5876.628002126" watchObservedRunningTime="2026-02-21 08:25:01.596904532 +0000 UTC m=+5876.629988730" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.589308 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.591397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.600986 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.640326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerStarted","Data":"bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115"} Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.640745 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.655323 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.656403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.682884 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.682967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.683029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.683071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.703416 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" podStartSLOduration=2.182800497 podStartE2EDuration="7.703390707s" podCreationTimestamp="2026-02-21 08:24:59 +0000 
UTC" firstStartedPulling="2026-02-21 08:25:00.455752256 +0000 UTC m=+5875.488836444" lastFinishedPulling="2026-02-21 08:25:05.976342456 +0000 UTC m=+5881.009426654" observedRunningTime="2026-02-21 08:25:06.687808055 +0000 UTC m=+5881.720892253" watchObservedRunningTime="2026-02-21 08:25:06.703390707 +0000 UTC m=+5881.736474905" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.704914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.754212 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.757214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.761820 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.784784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.784941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.806180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.808868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.817955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.819192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: 
\"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887231 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.892106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.892146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.893363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.909442 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.910606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.980651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.988991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: 
\"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.992955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.996110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.996959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.008388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod 
\"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.081380 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.709182 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.724019 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.732877 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.734030 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.735995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.736169 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.743450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.768207 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.769897 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.774913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.775080 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.782559 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807072 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7c5\" (UniqueName: 
\"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7c5\" (UniqueName: \"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.916132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.917091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.924864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.929770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.930049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.930697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7c5\" (UniqueName: \"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" 
(UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011350 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " 
pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.014492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.014739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.015442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.015565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc 
kubenswrapper[4820]: I0221 08:25:08.019992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.031012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.057798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.091121 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.657206 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" containerID="cri-o://bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115" gracePeriod=60 Feb 21 08:25:10 crc kubenswrapper[4820]: I0221 08:25:10.674561 4820 generic.go:334] "Generic (PLEG): container finished" podID="626f6f5d-6222-406e-a687-92b74b1c9def" containerID="bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115" exitCode=0 Feb 21 08:25:10 crc kubenswrapper[4820]: I0221 08:25:10.674704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerDied","Data":"bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115"} Feb 21 08:25:11 crc kubenswrapper[4820]: I0221 08:25:11.630410 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5879b888bd-q5njq" podUID="d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:25:12 crc kubenswrapper[4820]: I0221 08:25:12.992731 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.112127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.126541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b" (OuterVolumeSpecName: "kube-api-access-5cj8b") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "kube-api-access-5cj8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.126666 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.153217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.207313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data" (OuterVolumeSpecName: "config-data") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215479 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215515 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215529 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215537 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.355736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.369655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"] Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.393770 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"] Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.428777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"] Feb 21 08:25:13 crc kubenswrapper[4820]: W0221 08:25:13.430844 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64841624_ecc9_4a68_b2f8_294f328c7ce3.slice/crio-c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f WatchSource:0}: Error finding container c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f: Status 404 returned error can't find the container with id c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.446807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.701157 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78db975b86-shg8k" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" containerID="cri-o://9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" gracePeriod=60 Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerStarted","Data":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714347 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"ef07e0c03be2d134b2f51dc8fc1906a244df0aab0718ffcaa30aac6f0d1acf91"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" 
event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.725826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c9d48c7f5-9ghjf" event={"ID":"55b82e21-7221-4043-b9a8-5ac5853acaa1","Type":"ContainerStarted","Data":"255549b7249f1f90ac63836dd47adfe474769411d294df20d80dd46a6de64a12"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.727595 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78db975b86-shg8k" podStartSLOduration=3.481834526 podStartE2EDuration="14.727575329s" podCreationTimestamp="2026-02-21 08:24:59 +0000 UTC" firstStartedPulling="2026-02-21 08:25:00.596493094 +0000 UTC m=+5875.629577292" lastFinishedPulling="2026-02-21 08:25:11.842233897 +0000 UTC m=+5886.875318095" observedRunningTime="2026-02-21 08:25:13.723850118 +0000 UTC m=+5888.756934316" watchObservedRunningTime="2026-02-21 08:25:13.727575329 +0000 UTC m=+5888.760659537" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerDied","Data":"580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728272 4820 scope.go:117] "RemoveContainer" containerID="bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728271 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.730580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fbc8dc6-rvrvw" event={"ID":"f98ac827-2c89-4d1b-afc3-a5bd668b5d60","Type":"ContainerStarted","Data":"2a1b2d15de70e7c7f7ff1c4fb21ff8ae7326cfda54af2d2ad996dcb3e37e6abd"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.733152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" event={"ID":"c1f86beb-e638-4e60-a435-b09e2c01e733","Type":"ContainerStarted","Data":"fb5b8e08da6c0049ee03bd95f2b9b6aadfa4dea735ad1168d53153507b080c6b"} Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.808211 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.818539 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.712487 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" path="/var/lib/kubelet/pods/626f6f5d-6222-406e-a687-92b74b1c9def/volumes" Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.756871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"} Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.757068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.759062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" 
event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"} Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.771412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c9d48c7f5-9ghjf" event={"ID":"55b82e21-7221-4043-b9a8-5ac5853acaa1","Type":"ContainerStarted","Data":"12a1b6cc00c6a599e5e9e9f51eafa296721c5391c5c8080e55a9dc942d5ebde8"} Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.776031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fbc8dc6-rvrvw" event={"ID":"f98ac827-2c89-4d1b-afc3-a5bd668b5d60","Type":"ContainerStarted","Data":"17decb4e274786f3b92565e5c3decf718376cb107e73d2452ad7162b9a1683a5"} Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.778720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" event={"ID":"c1f86beb-e638-4e60-a435-b09e2c01e733","Type":"ContainerStarted","Data":"991b463bf4f27fff49ec8d5a638b88b0990c3d9bc28c012c1da88d4c22ff310b"} Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.860645 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podStartSLOduration=9.860623042 podStartE2EDuration="9.860623042s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:15.844805574 +0000 UTC m=+5890.877889772" watchObservedRunningTime="2026-02-21 08:25:15.860623042 +0000 UTC m=+5890.893707240" Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.832606 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-79f7554c5d-dxxm7" podStartSLOduration=10.83258284 podStartE2EDuration="10.83258284s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.815146628 +0000 UTC m=+5891.848230836" watchObservedRunningTime="2026-02-21 08:25:16.83258284 +0000 UTC m=+5891.865667058" Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.843869 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" podStartSLOduration=9.843849034 podStartE2EDuration="9.843849034s" podCreationTimestamp="2026-02-21 08:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.841376857 +0000 UTC m=+5891.874461055" watchObservedRunningTime="2026-02-21 08:25:16.843849034 +0000 UTC m=+5891.876933232" Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.890652 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-c9d48c7f5-9ghjf" podStartSLOduration=9.890628361 podStartE2EDuration="9.890628361s" podCreationTimestamp="2026-02-21 08:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.86842612 +0000 UTC m=+5891.901510318" watchObservedRunningTime="2026-02-21 08:25:16.890628361 +0000 UTC m=+5891.923712569" Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.892042 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7fbc8dc6-rvrvw" podStartSLOduration=10.892032609 podStartE2EDuration="10.892032609s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.882969393 +0000 UTC m=+5891.916053601" watchObservedRunningTime="2026-02-21 08:25:16.892032609 +0000 UTC m=+5891.925116807" Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.910879 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.082704 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.811866 4820 generic.go:334] "Generic (PLEG): container finished" podID="422684e4-6de9-44af-9684-9cc724395af6" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a" exitCode=1 Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.812002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"} Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.812747 4820 scope.go:117] "RemoveContainer" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a" Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.058424 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.091724 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.349209 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.827589 4820 generic.go:334] "Generic (PLEG): container finished" podID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16" exitCode=1 Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.827655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" 
event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"} Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.828635 4820 scope.go:117] "RemoveContainer" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16" Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.830622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"} Feb 21 08:25:19 crc kubenswrapper[4820]: I0221 08:25:19.806659 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:25:19 crc kubenswrapper[4820]: I0221 08:25:19.843431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.095937 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154377 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154707 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" containerID="cri-o://31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" gracePeriod=30 Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154912 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" 
containerID="cri-o://75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" gracePeriod=30 Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.855348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"} Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.855899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.396115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867757 4820 generic.go:334] "Generic (PLEG): container finished" podID="422684e4-6de9-44af-9684-9cc724395af6" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7" exitCode=1 Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"} Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867887 4820 scope.go:117] "RemoveContainer" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a" Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.868700 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7" Feb 21 08:25:21 crc kubenswrapper[4820]: E0221 08:25:21.869020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi 
pod=heat-cfnapi-7c5647759f-rwklt_openstack(422684e4-6de9-44af-9684-9cc724395af6)\"" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podUID="422684e4-6de9-44af-9684-9cc724395af6" Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.981346 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.878135 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7" Feb 21 08:25:22 crc kubenswrapper[4820]: E0221 08:25:22.878692 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c5647759f-rwklt_openstack(422684e4-6de9-44af-9684-9cc724395af6)\"" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podUID="422684e4-6de9-44af-9684-9cc724395af6" Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.879955 4820 generic.go:334] "Generic (PLEG): container finished" podID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610" exitCode=1 Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.879980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"} Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.880008 4820 scope.go:117] "RemoveContainer" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16" Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.880347 4820 scope.go:117] "RemoveContainer" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610" Feb 21 08:25:22 crc kubenswrapper[4820]: E0221 08:25:22.880539 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-79f7554c5d-dxxm7_openstack(64841624-ecc9-4a68-b2f8-294f328c7ce3)\"" pod="openstack/heat-api-79f7554c5d-dxxm7" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.299819 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54136->10.217.1.103:8443: read: connection reset by peer" Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.892579 4820 generic.go:334] "Generic (PLEG): container finished" podID="81b52673-da5b-421f-be4c-d5608c8d82df" containerID="75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" exitCode=0 Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.892621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c"} Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.374899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.440764 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.499214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.553670 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:24 crc 
kubenswrapper[4820]: I0221 08:25:24.856212 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"ef07e0c03be2d134b2f51dc8fc1906a244df0aab0718ffcaa30aac6f0d1acf91"} Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906115 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906149 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.907982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f"} Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.908015 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod 
\"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983597 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.988929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8" (OuterVolumeSpecName: "kube-api-access-lx4n8") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "kube-api-access-lx4n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.990017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.009576 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.013507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.052626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data" (OuterVolumeSpecName: "config-data") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: 
\"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085847 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086376 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086400 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086414 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086437 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.089457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk" (OuterVolumeSpecName: "kube-api-access-v6rhk") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "kube-api-access-v6rhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.090331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.112122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.134967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data" (OuterVolumeSpecName: "config-data") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187932 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187968 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187977 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187986 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.236102 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.244597 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.710206 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422684e4-6de9-44af-9684-9cc724395af6" path="/var/lib/kubelet/pods/422684e4-6de9-44af-9684-9cc724395af6/volumes" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.917083 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.944290 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.958197 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:26 crc kubenswrapper[4820]: I0221 08:25:26.940393 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.011939 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.012505 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" containerID="cri-o://44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" gracePeriod=60 Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.781816 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" path="/var/lib/kubelet/pods/64841624-ecc9-4a68-b2f8-294f328c7ce3/volumes" Feb 21 08:25:28 crc kubenswrapper[4820]: I0221 08:25:28.979333 4820 scope.go:117] "RemoveContainer" containerID="8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.072037 4820 scope.go:117] "RemoveContainer" containerID="9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.129062 4820 scope.go:117] "RemoveContainer" containerID="150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.444447 4820 scope.go:117] "RemoveContainer" 
containerID="2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.472311 4820 scope.go:117] "RemoveContainer" containerID="e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.546071 4820 scope.go:117] "RemoveContainer" containerID="3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9" Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.719211 4820 scope.go:117] "RemoveContainer" containerID="fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f" Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.783847 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.785578 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.787370 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.787404 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:31 crc kubenswrapper[4820]: I0221 08:25:31.557787 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.103:8443: connect: connection refused" Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.783755 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.786142 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.787492 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.787562 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:41 crc kubenswrapper[4820]: I0221 08:25:41.558302 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.103:8443: connect: connection refused" Feb 21 08:25:41 crc kubenswrapper[4820]: I0221 08:25:41.558749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:43 crc kubenswrapper[4820]: I0221 08:25:43.755835 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod626f6f5d-6222-406e-a687-92b74b1c9def"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod626f6f5d-6222-406e-a687-92b74b1c9def] : Timed out while waiting for systemd to remove kubepods-besteffort-pod626f6f5d_6222_406e_a687_92b74b1c9def.slice" Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.785525 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.787822 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.791102 4820 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.791174 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.141405 4820 generic.go:334] "Generic (PLEG): container finished" podID="81b52673-da5b-421f-be4c-d5608c8d82df" containerID="31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" exitCode=137 Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.141485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a"} Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.257092 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338691 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338885 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.339084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.340206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs" (OuterVolumeSpecName: "logs") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.344935 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.344968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz" (OuterVolumeSpecName: "kube-api-access-97qhz") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "kube-api-access-97qhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.364431 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts" (OuterVolumeSpecName: "scripts") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.367938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data" (OuterVolumeSpecName: "config-data") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.376634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.390504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441150 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441180 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441190 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441201 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441219 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441242 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154607 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"eff2d04aa677852d296ff8fc2a98555932014b77b70e9d62fecd2afd6b553dbd"} Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154680 4820 scope.go:117] "RemoveContainer" containerID="75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154745 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.183868 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.192472 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.352813 4820 scope.go:117] "RemoveContainer" containerID="31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" Feb 21 08:25:53 crc kubenswrapper[4820]: I0221 08:25:53.707522 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" path="/var/lib/kubelet/pods/81b52673-da5b-421f-be4c-d5608c8d82df/volumes" Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.784305 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.787020 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.788480 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.788524 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.055069 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.070016 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.072490 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.078783 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.710871 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" path="/var/lib/kubelet/pods/526239a3-9756-4dd4-9e38-6474bd1b2709/volumes" Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.711532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" 
path="/var/lib/kubelet/pods/b19f4a26-20d3-44b1-a159-3fd72a92e68f/volumes" Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.784445 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.786367 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.787646 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.787722 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:13 crc kubenswrapper[4820]: I0221 08:26:13.816214 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:26:13 crc 
kubenswrapper[4820]: I0221 08:26:13.816862 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.280621 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301455 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 
08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.319783 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.319872 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx" (OuterVolumeSpecName: "kube-api-access-llnrx") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "kube-api-access-llnrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.342832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365222 4820 generic.go:334] "Generic (PLEG): container finished" podID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" exitCode=137 Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerDied","Data":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"} Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerDied","Data":"3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab"} Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365451 4820 scope.go:117] "RemoveContainer" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365592 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.368973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data" (OuterVolumeSpecName: "config-data") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404182 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404216 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404224 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404248 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.422488 4820 scope.go:117] "RemoveContainer" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: E0221 08:26:14.423506 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": container with ID starting with 9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1 not found: ID does not exist" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.423551 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"} err="failed to get container status \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": rpc error: code = NotFound desc = could not find container \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": container with ID starting with 9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1 not found: ID does not exist" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.702800 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.711317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:26:15 crc kubenswrapper[4820]: I0221 08:26:15.706998 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" path="/var/lib/kubelet/pods/6eba12fb-3cb7-4830-9722-9c9e6ab46002/volumes" Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.784373 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.786457 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.787602 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.787650 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:27 crc kubenswrapper[4820]: I0221 08:26:27.482790 4820 generic.go:334] "Generic (PLEG): container finished" podID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" exitCode=137 Feb 21 08:26:27 crc kubenswrapper[4820]: I0221 08:26:27.483075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerDied","Data":"44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5"} Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.029071 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.214813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215213 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.222619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb" (OuterVolumeSpecName: "kube-api-access-s9gdb") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "kube-api-access-s9gdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.227063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.259125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.279344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data" (OuterVolumeSpecName: "config-data") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.318985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319020 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319032 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319041 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.493967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerDied","Data":"dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e"} Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.494022 4820 scope.go:117] "RemoveContainer" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.494057 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.540101 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.552868 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:26:29 crc kubenswrapper[4820]: I0221 08:26:29.707681 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" path="/var/lib/kubelet/pods/5d104918-3b6f-4543-9ca3-0ae595be78a2/volumes" Feb 21 08:26:30 crc kubenswrapper[4820]: I0221 08:26:30.046607 4820 scope.go:117] "RemoveContainer" containerID="9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa" Feb 21 08:26:30 crc kubenswrapper[4820]: I0221 08:26:30.070504 4820 scope.go:117] "RemoveContainer" containerID="2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9" Feb 21 08:26:43 crc kubenswrapper[4820]: I0221 08:26:43.816788 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:26:43 crc kubenswrapper[4820]: I0221 08:26:43.817316 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.049059 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.058921 4820 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.707868 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" path="/var/lib/kubelet/pods/918975eb-d5b2-4b0e-9b35-36e92f03527b/volumes" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.693832 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694724 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694737 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694742 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694757 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694773 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694779 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694788 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694793 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694803 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694808 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694821 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694834 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694839 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695020 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695036 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" 
containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695044 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695053 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695065 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695076 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695087 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.695305 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695467 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.696615 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.698565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.714615 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849431 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: 
I0221 08:26:54.951158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.954621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.979257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.017209 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.489331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.732889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerStarted","Data":"85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48"} Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.732944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerStarted","Data":"56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7"} Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.052672 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.055277 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.065466 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: 
\"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.276126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.298199 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " 
pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.375699 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.743155 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48" exitCode=0 Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.743196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48"} Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.870499 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: W0221 08:26:56.873489 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf332fd92_9ed1_4d69_95ec_fcfc12cbd311.slice/crio-f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57 WatchSource:0}: Error finding container f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57: Status 404 returned error can't find the container with id f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57 Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755444 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" exitCode=0 Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755570 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" 
event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0"} Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755899 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerStarted","Data":"f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57"} Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.775326 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" exitCode=0 Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.775414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389"} Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.777856 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="2e9e7218200e546cc33414ed26da51e8bc2c5aab353917218a694e404cd445fa" exitCode=0 Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.777895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"2e9e7218200e546cc33414ed26da51e8bc2c5aab353917218a694e404cd445fa"} Feb 21 08:27:00 crc kubenswrapper[4820]: I0221 08:27:00.788537 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="47e76bf7469e806b7b5d94074b78221815a4943a23c3a994ffe77f1f76615bf1" exitCode=0 Feb 21 08:27:00 crc kubenswrapper[4820]: I0221 08:27:00.788610 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"47e76bf7469e806b7b5d94074b78221815a4943a23c3a994ffe77f1f76615bf1"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.159247 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.199421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle" (OuterVolumeSpecName: "bundle") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.203475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util" (OuterVolumeSpecName: "util") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.208730 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt" (OuterVolumeSpecName: "kube-api-access-mn9tt") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "kube-api-access-mn9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295344 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295388 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295399 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" 
event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811589 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811314 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.814002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerStarted","Data":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.847465 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lcfz" podStartSLOduration=3.451932966 podStartE2EDuration="6.847444697s" podCreationTimestamp="2026-02-21 08:26:56 +0000 UTC" firstStartedPulling="2026-02-21 08:26:57.75750161 +0000 UTC m=+5992.790585808" lastFinishedPulling="2026-02-21 08:27:01.153013341 +0000 UTC m=+5996.186097539" observedRunningTime="2026-02-21 08:27:02.841916937 +0000 UTC m=+5997.875001205" watchObservedRunningTime="2026-02-21 08:27:02.847444697 +0000 UTC m=+5997.880528895" Feb 21 08:27:06 crc kubenswrapper[4820]: I0221 08:27:06.376602 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:06 crc kubenswrapper[4820]: I0221 08:27:06.376692 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:07 crc kubenswrapper[4820]: I0221 08:27:07.429369 4820 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lcfz" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" probeResult="failure" output=< Feb 21 08:27:07 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:27:07 crc kubenswrapper[4820]: > Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.288859 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289749 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="util" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="util" Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289783 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289792 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289801 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="pull" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289808 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="pull" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289978 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.290624 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.293617 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-l79dr" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.294347 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.299450 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.371200 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.372351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.475400 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.476695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.477189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.481636 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.489259 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kv57d" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.507361 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.508780 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.528821 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.535108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.544692 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.579581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.579663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.608110 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 
08:27:13.742272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.767799 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.770336 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.774774 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-t2p9k" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.775145 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.784420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.800730 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.800922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.803089 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.826346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840422 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840505 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840563 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.841774 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.841855 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" gracePeriod=600 Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.867354 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.890479 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.904846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.904947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.015594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.015680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.035118 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.036899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.038574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.044602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.045854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6jwr4" Feb 21 08:27:14 crc kubenswrapper[4820]: E0221 08:27:14.064077 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.088255 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 
08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.127426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.127498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.230263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.230334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.231602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod 
\"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.239738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.254017 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.417872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.538542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.840829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.029733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" event={"ID":"b371e087-d814-4a0f-9ff3-d55d20e24544","Type":"ContainerStarted","Data":"7015609bff7430f06327a8ed8bf9116a2e41fa5800d00a2e59f28260896a0992"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.031204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" 
event={"ID":"dab6a090-8dce-4a3c-aa4a-467c37f77510","Type":"ContainerStarted","Data":"08802462c79e54eee01dc2c372789702e44754a9a562f9db57071148537845a0"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038509 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" exitCode=0 Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038601 4820 scope.go:117] "RemoveContainer" containerID="71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.039390 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:15 crc kubenswrapper[4820]: E0221 08:27:15.039682 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.062721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.156736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] 
Feb 21 08:27:15 crc kubenswrapper[4820]: W0221 08:27:15.195225 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33c8ba11_479e_4bbc_87c4_0d6da77be2eb.slice/crio-8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920 WatchSource:0}: Error finding container 8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920: Status 404 returned error can't find the container with id 8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920 Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.197643 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.084635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" event={"ID":"6c94be0a-30e4-454d-a744-be2161cdbed2","Type":"ContainerStarted","Data":"367a001a345c6bc9fa61f82a2a495381a595891df22324fa588e7d18241d8d65"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.095701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" event={"ID":"33a57c79-5f59-4436-802e-2be346a7f24b","Type":"ContainerStarted","Data":"4983dc7ff215f46fafdaf55c8453068d55e0623b53c7a09c5b3cb62da5ac5ebe"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.098359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" event={"ID":"33c8ba11-479e-4bbc-87c4-0d6da77be2eb","Type":"ContainerStarted","Data":"8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.477574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 
08:27:16.599041 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.811532 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:18 crc kubenswrapper[4820]: I0221 08:27:18.134078 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lcfz" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" containerID="cri-o://5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" gracePeriod=2 Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.100520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173535 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 
08:27:19.174471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities" (OuterVolumeSpecName: "utilities") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196784 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" exitCode=0 Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57"} Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196935 4820 scope.go:117] "RemoveContainer" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196969 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.199417 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j" (OuterVolumeSpecName: "kube-api-access-6495j") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "kube-api-access-6495j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.264908 4820 scope.go:117] "RemoveContainer" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.277608 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.277661 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.351618 4820 scope.go:117] "RemoveContainer" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.370290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.381445 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.412845 4820 scope.go:117] "RemoveContainer" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.416603 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": container with ID starting with 5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05 not found: ID does not exist" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.416699 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} err="failed to get container status \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": rpc error: code = NotFound desc = could not find container \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": container with ID starting with 5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.416724 4820 scope.go:117] "RemoveContainer" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.417283 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": container with ID starting with 55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389 not found: ID does not exist" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417331 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389"} err="failed to get container status \"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": rpc error: code = NotFound desc = could not find container \"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": container with ID starting with 55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417369 4820 scope.go:117] "RemoveContainer" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.417678 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": container with ID starting with fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0 not found: ID does not exist" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417698 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0"} err="failed to get container status \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": rpc error: code = NotFound desc = could not find container \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": container with ID 
starting with fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.548542 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.564572 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.721473 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" path="/var/lib/kubelet/pods/f332fd92-9ed1-4d69-95ec-fcfc12cbd311/volumes" Feb 21 08:27:28 crc kubenswrapper[4820]: I0221 08:27:28.696954 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:28 crc kubenswrapper[4820]: E0221 08:27:28.697976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:30 crc kubenswrapper[4820]: I0221 08:27:30.230161 4820 scope.go:117] "RemoveContainer" containerID="8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" Feb 21 08:27:32 crc kubenswrapper[4820]: I0221 08:27:32.513780 4820 scope.go:117] "RemoveContainer" containerID="ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" Feb 21 08:27:32 crc kubenswrapper[4820]: I0221 08:27:32.559616 4820 scope.go:117] "RemoveContainer" containerID="396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.341085 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" event={"ID":"33a57c79-5f59-4436-802e-2be346a7f24b","Type":"ContainerStarted","Data":"86a283e87458f5d1a54d6236bb48aabebaff5f0ac07afa684994d780e05d8e05"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.343333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" event={"ID":"33c8ba11-479e-4bbc-87c4-0d6da77be2eb","Type":"ContainerStarted","Data":"468f81479571c143114337829a200469a5c2b406004527517a69a2cb318af9b9"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.343451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.346230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" event={"ID":"dab6a090-8dce-4a3c-aa4a-467c37f77510","Type":"ContainerStarted","Data":"bdfa84ae7fda1bab5214e775f544d97c62b594e23ab2cae7af41dd027a20dfcd"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.346456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.349504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" event={"ID":"6c94be0a-30e4-454d-a744-be2161cdbed2","Type":"ContainerStarted","Data":"3768f8907766bbe5324d9699bf6291d24f5cd455f7c4fd81d1090166fc18cc9a"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.351981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" 
event={"ID":"b371e087-d814-4a0f-9ff3-d55d20e24544","Type":"ContainerStarted","Data":"8cf336a0ca61eca31ad876313caba0a4ba4afd24a27dd6443be2d2bc47b2fb6e"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.364982 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" podStartSLOduration=2.992854642 podStartE2EDuration="20.364965988s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:15.164693366 +0000 UTC m=+6010.197777564" lastFinishedPulling="2026-02-21 08:27:32.536804712 +0000 UTC m=+6027.569888910" observedRunningTime="2026-02-21 08:27:33.361482625 +0000 UTC m=+6028.394566823" watchObservedRunningTime="2026-02-21 08:27:33.364965988 +0000 UTC m=+6028.398050176" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.386953 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" podStartSLOduration=2.432265477 podStartE2EDuration="20.38693474s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:14.584168953 +0000 UTC m=+6009.617253151" lastFinishedPulling="2026-02-21 08:27:32.538838206 +0000 UTC m=+6027.571922414" observedRunningTime="2026-02-21 08:27:33.378661347 +0000 UTC m=+6028.411745545" watchObservedRunningTime="2026-02-21 08:27:33.38693474 +0000 UTC m=+6028.420018928" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.390933 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.413034 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" podStartSLOduration=3.095223512 podStartE2EDuration="20.413010603s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" 
firstStartedPulling="2026-02-21 08:27:15.198717744 +0000 UTC m=+6010.231801942" lastFinishedPulling="2026-02-21 08:27:32.516504835 +0000 UTC m=+6027.549589033" observedRunningTime="2026-02-21 08:27:33.40103006 +0000 UTC m=+6028.434114258" watchObservedRunningTime="2026-02-21 08:27:33.413010603 +0000 UTC m=+6028.446094811" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.444711 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" podStartSLOduration=3.031319778 podStartE2EDuration="20.444691816s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:15.101115613 +0000 UTC m=+6010.134199811" lastFinishedPulling="2026-02-21 08:27:32.514487651 +0000 UTC m=+6027.547571849" observedRunningTime="2026-02-21 08:27:33.443137665 +0000 UTC m=+6028.476221883" watchObservedRunningTime="2026-02-21 08:27:33.444691816 +0000 UTC m=+6028.477776014" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.478655 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" podStartSLOduration=2.767550022 podStartE2EDuration="20.478634841s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:14.848584709 +0000 UTC m=+6009.881668907" lastFinishedPulling="2026-02-21 08:27:32.559669528 +0000 UTC m=+6027.592753726" observedRunningTime="2026-02-21 08:27:33.475079746 +0000 UTC m=+6028.508163954" watchObservedRunningTime="2026-02-21 08:27:33.478634841 +0000 UTC m=+6028.511719039" Feb 21 08:27:39 crc kubenswrapper[4820]: I0221 08:27:39.697363 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:39 crc kubenswrapper[4820]: E0221 08:27:39.698204 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:44 crc kubenswrapper[4820]: I0221 08:27:44.420855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.143824 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.144342 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" containerID="cri-o://d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" gracePeriod=2 Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.159478 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.216309 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218374 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218388 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-content" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218396 4820 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-content" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218470 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-utilities" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218480 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-utilities" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218685 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218713 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.219464 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.243716 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.270668 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.415278 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.419572 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.427580 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vsmcb" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.434349 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478345 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478461 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.480200 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.489882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.492266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.512835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp8t\" (UniqueName: 
\"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.555336 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.696924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.750094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.761564 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.189374 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.191923 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196760 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196916 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.197177 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-csq7d" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.197410 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.229330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436755 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: 
\"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.437125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.442725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.443053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.446379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.446591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.449071 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.476148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.493230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.538428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609","Type":"ContainerStarted","Data":"6c917a75194f55f50fe2b10282b30fc6a5cdf4e8c4a51c57a5e5f0dd1fc7b3a1"} Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.553445 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.765974 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.780316 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.783482 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.785879 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.786116 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.787806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.787990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788120 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 
08:27:49.788374 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788479 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxfvn" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.789831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.963140 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.963266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065502 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.066711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.066982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.067042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.071876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.071895 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.072958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.074089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.074516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.076506 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.076546 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd8dd8437aa8075cd51ba65607a645fc10f7b325eb32cc6b53f399eac5c08fb8/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.084997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.127306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.197403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.231819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.551591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"7ae037941122768c0878a7ea9d096cb1af11f4eef12dd5a6757839730b130eed"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.553842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609","Type":"ContainerStarted","Data":"7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.555405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerStarted","Data":"6f3a59fdd346b4bf2cd6317827d2bf8f9f715934794135b9913b0326998f7186"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.559879 4820 generic.go:334] "Generic (PLEG): container finished" podID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerID="d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" exitCode=137 Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.576748 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.5767278620000003 podStartE2EDuration="2.576727862s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:27:50.567936785 +0000 UTC m=+6045.601020973" watchObservedRunningTime="2026-02-21 08:27:50.576727862 +0000 UTC m=+6045.609812060" Feb 21 08:27:50 
crc kubenswrapper[4820]: I0221 08:27:50.655047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:51 crc kubenswrapper[4820]: I0221 08:27:51.568536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"8024b89ef022e9ea6ebea26e2dc95ed3b4eeb5984a56c0e3f69c3d705d4bb5c2"} Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.006788 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119933 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") 
pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.128546 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4" (OuterVolumeSpecName: "kube-api-access-nnds4") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "kube-api-access-nnds4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.157581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.169675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.178859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222354 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222388 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222415 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222423 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.585607 4820 scope.go:117] "RemoveContainer" containerID="d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.585646 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.612977 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.712794 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" path="/var/lib/kubelet/pods/0690f7f6-8a8e-4c10-92b5-31640a2a46b1/volumes" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.595535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerStarted","Data":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.596140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.614363 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.369216561 podStartE2EDuration="6.614318973s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="2026-02-21 08:27:49.80292633 +0000 UTC m=+6044.836010528" lastFinishedPulling="2026-02-21 08:27:54.048028742 +0000 UTC m=+6049.081112940" observedRunningTime="2026-02-21 08:27:54.610281104 +0000 UTC m=+6049.643365312" watchObservedRunningTime="2026-02-21 08:27:54.614318973 +0000 UTC m=+6049.647403181" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.696899 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:54 crc kubenswrapper[4820]: E0221 08:27:54.697172 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:00 crc kubenswrapper[4820]: I0221 08:28:00.652591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da"} Feb 21 08:28:01 crc kubenswrapper[4820]: I0221 08:28:01.664433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.697566 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:07 crc kubenswrapper[4820]: E0221 08:28:07.698891 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.725521 4820 generic.go:334] "Generic (PLEG): container finished" podID="0042658c-e832-4073-894f-78a25bcdb5f9" containerID="f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da" exitCode=0 Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.725614 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerDied","Data":"f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da"} Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.733655 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" exitCode=0 Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.733706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} Feb 21 08:28:08 crc kubenswrapper[4820]: I0221 08:28:08.766456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.134181 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.137898 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.152755 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.300379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.300939 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.348136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.459164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.823808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.832641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"06d78d811c45dafd03edd611aefbbf2dada0ef016a3223fc95882794fd36d330"} Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.964762 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: W0221 08:28:14.973370 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff694654_0a77_4fcd_86a3_af752c869359.slice/crio-3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2 WatchSource:0}: Error finding container 3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2: Status 404 returned error can't find the container with id 3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2 Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.843790 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" exitCode=0 Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.843989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58"} 
Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.844411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2"} Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.930296 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.932801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.942294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod 
\"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.140727 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"community-operators-ffmqr\" (UID: 
\"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.162599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.299045 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.828599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:16 crc kubenswrapper[4820]: W0221 08:28:16.828746 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a562943_0b50_4684_bfae_b185088ff6ba.slice/crio-c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3 WatchSource:0}: Error finding container c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3: Status 404 returned error can't find the container with id c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3 Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.855122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.856720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" 
event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3"} Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.335711 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.348228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.386209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.386548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.387132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.413994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490335 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.491341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.492572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.512175 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.679257 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.866610 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182" exitCode=0 Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.866783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182"} Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.878881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.390837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891212 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413" exitCode=0 Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" 
event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"8ca2ef79583171317b60a42c2026737ba15070b89b3825567b51aa143c5c24b8"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.899802 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"04f8db3d84f2c39089205e963ed81f647af5d177ba7c89996c8a2d19d0fcf26e"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.900095 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.907754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.946670 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.912060485 podStartE2EDuration="29.946647164s" podCreationTimestamp="2026-02-21 08:27:49 +0000 UTC" firstStartedPulling="2026-02-21 08:27:50.241849218 +0000 UTC m=+6045.274933406" lastFinishedPulling="2026-02-21 08:28:14.276435897 +0000 UTC m=+6069.309520085" observedRunningTime="2026-02-21 08:28:18.938187766 +0000 UTC m=+6073.971271974" watchObservedRunningTime="2026-02-21 08:28:18.946647164 +0000 UTC m=+6073.979731362" Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.919572 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" 
event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0"} Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.924404 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" exitCode=0 Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.924476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.928832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.980743 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef" exitCode=0 Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.980835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.983621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 
08:28:22.984481 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.986142 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0" exitCode=0 Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.986209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.988109 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.028544 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-px47t" podStartSLOduration=3.201742345 podStartE2EDuration="9.028526949s" podCreationTimestamp="2026-02-21 08:28:14 +0000 UTC" firstStartedPulling="2026-02-21 08:28:15.845439506 +0000 UTC m=+6070.878523704" lastFinishedPulling="2026-02-21 08:28:21.67222411 +0000 UTC m=+6076.705308308" observedRunningTime="2026-02-21 08:28:23.021665294 +0000 UTC m=+6078.054749502" watchObservedRunningTime="2026-02-21 08:28:23.028526949 +0000 UTC m=+6078.061611147" Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.089355 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.54632791 podStartE2EDuration="35.089335687s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="2026-02-21 
08:27:50.666912212 +0000 UTC m=+6045.699996420" lastFinishedPulling="2026-02-21 08:28:22.209919999 +0000 UTC m=+6077.243004197" observedRunningTime="2026-02-21 08:28:23.073569622 +0000 UTC m=+6078.106653820" watchObservedRunningTime="2026-02-21 08:28:23.089335687 +0000 UTC m=+6078.122419875" Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.696913 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:23 crc kubenswrapper[4820]: E0221 08:28:23.697400 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.003026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d"} Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.005282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9"} Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.040421 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffmqr" podStartSLOduration=3.465512779 podStartE2EDuration="9.040402215s" podCreationTimestamp="2026-02-21 08:28:15 +0000 UTC" firstStartedPulling="2026-02-21 08:28:17.868492391 +0000 UTC 
m=+6072.901576589" lastFinishedPulling="2026-02-21 08:28:23.443381827 +0000 UTC m=+6078.476466025" observedRunningTime="2026-02-21 08:28:24.027575409 +0000 UTC m=+6079.060659607" watchObservedRunningTime="2026-02-21 08:28:24.040402215 +0000 UTC m=+6079.073486413" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.086677 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwx2p" podStartSLOduration=2.570815123 podStartE2EDuration="7.086652871s" podCreationTimestamp="2026-02-21 08:28:17 +0000 UTC" firstStartedPulling="2026-02-21 08:28:18.894509149 +0000 UTC m=+6073.927593347" lastFinishedPulling="2026-02-21 08:28:23.410346897 +0000 UTC m=+6078.443431095" observedRunningTime="2026-02-21 08:28:24.046212031 +0000 UTC m=+6079.079296229" watchObservedRunningTime="2026-02-21 08:28:24.086652871 +0000 UTC m=+6079.119737069" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.459473 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.459731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.053396 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.065757 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.079157 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.088667 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 
08:28:25.198025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.504788 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:25 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:25 crc kubenswrapper[4820]: > Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.719800 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" path="/var/lib/kubelet/pods/3214fb7b-d651-4bd3-a75b-a9995693fc60/volumes" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.720645 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" path="/var/lib/kubelet/pods/d19c6e2c-81cf-472e-babb-fb9cf7bf052b/volumes" Feb 21 08:28:26 crc kubenswrapper[4820]: I0221 08:28:26.300133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:26 crc kubenswrapper[4820]: I0221 08:28:26.300467 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.353363 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ffmqr" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:27 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:27 crc kubenswrapper[4820]: > Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.466395 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 
08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.473405 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.481161 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.482346 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.488680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605910 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") 
pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.680140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.680198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc 
kubenswrapper[4820]: I0221 08:28:27.708630 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708795 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.709418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.709509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.723318 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.729134 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.729810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.743262 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.786177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.788310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.804817 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:28:28 crc kubenswrapper[4820]: I0221 08:28:28.140484 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:28 crc kubenswrapper[4820]: I0221 08:28:28.470564 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:28:29 crc kubenswrapper[4820]: I0221 08:28:29.066701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"237af00766cb3ac668153a70322a571c05a31fa10748184013c7aedd5f203ded"} Feb 21 08:28:29 crc kubenswrapper[4820]: I0221 08:28:29.323061 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:30 crc kubenswrapper[4820]: I0221 08:28:30.078475 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwx2p" 
podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server" containerID="cri-o://95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9" gracePeriod=2 Feb 21 08:28:31 crc kubenswrapper[4820]: I0221 08:28:31.092096 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9" exitCode=0 Feb 21 08:28:31 crc kubenswrapper[4820]: I0221 08:28:31.092199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9"} Feb 21 08:28:32 crc kubenswrapper[4820]: I0221 08:28:32.777052 4820 scope.go:117] "RemoveContainer" containerID="d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08" Feb 21 08:28:33 crc kubenswrapper[4820]: I0221 08:28:33.757651 4820 scope.go:117] "RemoveContainer" containerID="3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e" Feb 21 08:28:33 crc kubenswrapper[4820]: I0221 08:28:33.916916 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.066952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067272 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067307 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities" (OuterVolumeSpecName: "utilities") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.072699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt" (OuterVolumeSpecName: "kube-api-access-bkrlt") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "kube-api-access-bkrlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.083818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127027 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"8ca2ef79583171317b60a42c2026737ba15070b89b3825567b51aa143c5c24b8"} Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127125 4820 scope.go:117] "RemoveContainer" containerID="95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127133 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169800 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169841 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169856 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.178742 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.190667 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.055880 4820 scope.go:117] "RemoveContainer" containerID="defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0" Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.079211 4820 scope.go:117] "RemoveContainer" containerID="6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413" Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.198025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.207509 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:35 crc kubenswrapper[4820]: 
I0221 08:28:35.509698 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:35 crc kubenswrapper[4820]: > Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.700477 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:35 crc kubenswrapper[4820]: E0221 08:28:35.700696 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.713280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" path="/var/lib/kubelet/pods/de6c7e38-55e9-4696-9f00-f7774a9e1410/volumes" Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.162329 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be"} Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.163217 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.363289 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:36 crc 
kubenswrapper[4820]: I0221 08:28:36.424785 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.157290 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.671447 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.672860 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient" containerID="cri-o://7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" gracePeriod=2 Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.682861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713414 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713785 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-content" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713807 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-content" Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713824 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-utilities" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713833 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-utilities" Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 
08:28:37.713848 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713855 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713866 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713873 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.714134 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.714161 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.715001 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.718743 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.742577 4820 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7bff1f2-af0c-49de-981a-66f57457cc6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:8419493e1fd846703d277695e03fc5eb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T08:28:37Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.756364 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.757618 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-6fzqc openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-6fzqc openstack-config openstack-config-secret]: context canceled" 
pod="openstack/openstackclient" podUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.778270 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.788724 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.790337 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.798318 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.799553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.851487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852140 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxjn\" (UniqueName: 
\"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954404 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954585 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxjn\" (UniqueName: \"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: 
\"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.955865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.960855 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.969902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.973064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxjn\" (UniqueName: \"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient" Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.112649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186619 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186609 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e"} Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186770 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffmqr" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" containerID="cri-o://3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d" gracePeriod=2 Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.194714 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.387151 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.392660 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:38.709863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.204203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c888e608-8215-44cd-a30b-43b1c34b5685","Type":"ContainerStarted","Data":"ce8979afea330d028535b67341431f21a95db92497e86ee4513ede37f2783e32"} Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.204564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c888e608-8215-44cd-a30b-43b1c34b5685","Type":"ContainerStarted","Data":"7f1066cbe3db3a9cd4529c4a16baad5d1e0fb040be463ddd6709f5b9deed627e"} Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.212508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f"} Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224310 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d" exitCode=0 Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224416 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d"} Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.233271 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.256389 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.259195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.259171905 podStartE2EDuration="2.259171905s" podCreationTimestamp="2026-02-21 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:39.227559382 +0000 UTC m=+6094.260643580" watchObservedRunningTime="2026-02-21 08:28:39.259171905 +0000 UTC m=+6094.292256113" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447557 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" containerID="cri-o://da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" 
gracePeriod=600 Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447653 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" containerID="cri-o://dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" gracePeriod=600 Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447809 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" containerID="cri-o://8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" gracePeriod=600 Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.504428 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.596341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.597967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities" (OuterVolumeSpecName: "utilities") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.598152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.598215 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.599078 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.615478 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4" (OuterVolumeSpecName: "kube-api-access-c8qb4") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "kube-api-access-c8qb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.704875 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.706978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.714351 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" path="/var/lib/kubelet/pods/b7bff1f2-af0c-49de-981a-66f57457cc6d/volumes" Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.807205 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.203394 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240153 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3"} Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240210 4820 scope.go:117] "RemoveContainer" containerID="3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240211 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242945 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" exitCode=0 Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242979 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" exitCode=0 Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242990 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" exitCode=0 Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243097 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"8024b89ef022e9ea6ebea26e2dc95ed3b4eeb5984a56c0e3f69c3d705d4bb5c2"} Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.257705 4820 generic.go:334] "Generic (PLEG): container finished" podID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerID="7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" exitCode=137 Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.275966 4820 scope.go:117] "RemoveContainer" containerID="d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325966 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326085 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326483 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327216 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327267 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.330065 4820 scope.go:117] "RemoveContainer" containerID="85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.341286 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config" (OuterVolumeSpecName: "config") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.341413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn" (OuterVolumeSpecName: "kube-api-access-bhfxn") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "kube-api-access-bhfxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.345121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.345264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.353598 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out" (OuterVolumeSpecName: "config-out") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.375800 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.381744 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.393083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config" (OuterVolumeSpecName: "web-config") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430340 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") on node \"crc\" " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430405 4820 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430422 4820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430433 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430448 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430485 4820 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430499 4820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430511 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.477193 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.477382 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8") on node "crc" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.484098 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.542960 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.546023 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.576708 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.598798 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.615252 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.638605 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650000 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650551 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650589 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-content" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650599 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-content" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650611 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650617 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650627 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="init-config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650635 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="init-config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-utilities" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650659 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-utilities" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650672 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650677 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650696 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650860 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650874 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650887 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650902 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.654702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.657831 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.659313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.660570 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662190 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662292 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.665391 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.667632 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxfvn" Feb 21 08:28:40 crc 
kubenswrapper[4820]: I0221 08:28:40.667840 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.679591 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.680299 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.719897 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.720259 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720373 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720451 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 
08:28:40.720760 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720811 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.721055 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721147 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc 
error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721206 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.721803 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721835 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721853 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722342 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status 
\"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722372 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722636 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722661 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722964 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723023 4820 scope.go:117] "RemoveContainer" 
containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723627 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723655 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723930 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723950 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724186 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could 
not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724209 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724438 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724462 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724833 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.746988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: 
\"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747724 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747901 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747935 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748230 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748313 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.750997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t" (OuterVolumeSpecName: "kube-api-access-fjp8t") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "kube-api-access-fjp8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.782419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.784993 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.806722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.851979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852149 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 
08:28:40.852351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852444 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852696 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852722 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852740 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.854275 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: 
I0221 08:28:40.855281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.857166 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.857228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.859956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.861929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.862766 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.862793 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd8dd8437aa8075cd51ba65607a645fc10f7b325eb32cc6b53f399eac5c08fb8/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.863905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.864854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.865185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.866087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.866618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.878584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.911014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.974125 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.273343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973"} Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.281764 4820 scope.go:117] "RemoveContainer" containerID="7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.281784 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.321162 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.323783 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.910355341 podStartE2EDuration="14.323757128s" podCreationTimestamp="2026-02-21 08:28:27 +0000 UTC" firstStartedPulling="2026-02-21 08:28:28.476048922 +0000 UTC m=+6083.509133120" lastFinishedPulling="2026-02-21 08:28:39.889450709 +0000 UTC m=+6094.922534907" observedRunningTime="2026-02-21 08:28:41.302849315 +0000 UTC m=+6096.335933523" watchObservedRunningTime="2026-02-21 08:28:41.323757128 +0000 UTC m=+6096.356841326" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.499143 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.708184 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" 
path="/var/lib/kubelet/pods/7a562943-0b50-4684-bfae-b185088ff6ba/volumes" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.709706 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" path="/var/lib/kubelet/pods/7d23e6c5-673e-4d64-a39e-35e3b09d8d53/volumes" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.710991 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" path="/var/lib/kubelet/pods/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609/volumes" Feb 21 08:28:42 crc kubenswrapper[4820]: I0221 08:28:42.292778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"af923841822f57e1220072b03d9d12b984116351ec5d1c5c174e67e0eae729bb"} Feb 21 08:28:42 crc kubenswrapper[4820]: I0221 08:28:42.294575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 08:28:43 crc kubenswrapper[4820]: I0221 08:28:43.198057 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.128:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.844036 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.846309 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.865854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.878837 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.881715 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.884769 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.937556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.940482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.940557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.043546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod 
\"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044281 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.068620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gnt8\" (UniqueName: 
\"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.147074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.147171 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.148110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.168254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.169226 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.221036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.329908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62"} Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.543399 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:45 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:45 crc kubenswrapper[4820]: > Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.892195 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:45 crc kubenswrapper[4820]: W0221 08:28:45.906869 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b874f59_5a8f_4ecc_8405_4993b1fe7fc2.slice/crio-f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b WatchSource:0}: Error finding container f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b: Status 404 returned error can't find the container with id f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.988079 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:45 crc kubenswrapper[4820]: W0221 08:28:45.994762 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e9afae_f779_41ff_af87_712577c90f88.slice/crio-7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0 WatchSource:0}: Error finding container 7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0: Status 404 returned error can't find the container with id 7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0 Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.338904 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerID="0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a" exitCode=0 Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.338972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerDied","Data":"0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.339003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerStarted","Data":"f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.342219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerStarted","Data":"283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.342275 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerStarted","Data":"7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.375519 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-1ff2-account-create-update-lcrwl" podStartSLOduration=2.375498467 podStartE2EDuration="2.375498467s" podCreationTimestamp="2026-02-21 08:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:46.369132056 +0000 UTC m=+6101.402216254" watchObservedRunningTime="2026-02-21 08:28:46.375498467 +0000 UTC m=+6101.408582675" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.350547 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1e9afae-f779-41ff-af87-712577c90f88" containerID="283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe" exitCode=0 Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.350632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerDied","Data":"283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe"} Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.714030 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.822833 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.822984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.823575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" (UID: "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.824993 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.829403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8" (OuterVolumeSpecName: "kube-api-access-7gnt8") pod "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" (UID: "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2"). InnerVolumeSpecName "kube-api-access-7gnt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.927222 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerDied","Data":"f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b"} Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360586 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360331 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.717824 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.847396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"c1e9afae-f779-41ff-af87-712577c90f88\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.847704 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"c1e9afae-f779-41ff-af87-712577c90f88\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.848156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1e9afae-f779-41ff-af87-712577c90f88" (UID: "c1e9afae-f779-41ff-af87-712577c90f88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.849634 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.853131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf" (OuterVolumeSpecName: "kube-api-access-hcpgf") pod "c1e9afae-f779-41ff-af87-712577c90f88" (UID: "c1e9afae-f779-41ff-af87-712577c90f88"). InnerVolumeSpecName "kube-api-access-hcpgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.951728 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerDied","Data":"7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0"} Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370047 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370096 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.696768 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:49 crc kubenswrapper[4820]: E0221 08:28:49.697088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.048476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.063028 4820 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.077136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:50 crc kubenswrapper[4820]: E0221 08:28:50.077823 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.077940 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: E0221 08:28:50.078041 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078103 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078382 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078453 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.079331 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.082425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.082436 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.084922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.085046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.091719 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: 
\"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.281347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.287908 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.288929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.301372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.324451 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.399570 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: W0221 08:28:50.895110 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbe35ddb_c3e7_4233_96a1_fe0df9e13f6a.slice/crio-e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252 WatchSource:0}: Error finding container e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252: Status 404 returned error can't find the container with id e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252 Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.896599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.390896 4820 generic.go:334] "Generic (PLEG): container finished" podID="0c81808a-06e3-4353-b7a6-56ff53f15b69" containerID="abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62" exitCode=0 Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.390973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerDied","Data":"abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62"} Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.392722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerStarted","Data":"e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252"} Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.723751 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" path="/var/lib/kubelet/pods/f9668bc3-af3a-43af-8ead-9cc596776786/volumes" Feb 21 08:28:52 crc kubenswrapper[4820]: I0221 08:28:52.403702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"8e043988875a6b194029cb4d402f3b8a04c319fd2eefd167e63c2713966e2cf7"} Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.734648 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.796774 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.979684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.431258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"38a53d9e5a8a8fb83fe0e1762d74b4301db431fb511089c340616c0bc3dfbb29"} Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.431565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"6f31fe0f4ae9aa9a25ae259c618f133d9f2b6e9150b3c2dd0259349701fa165d"} Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.468882 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.468864425 podStartE2EDuration="15.468864425s" podCreationTimestamp="2026-02-21 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:55.459053351 +0000 UTC m=+6110.492137569" watchObservedRunningTime="2026-02-21 08:28:55.468864425 +0000 UTC m=+6110.501948623" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 
08:28:55.975405 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.975464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.982102 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:56 crc kubenswrapper[4820]: I0221 08:28:56.440277 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" containerID="cri-o://b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" gracePeriod=2 Feb 21 08:28:56 crc kubenswrapper[4820]: I0221 08:28:56.449039 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.037288 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241416 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241531 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.242339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities" (OuterVolumeSpecName: "utilities") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.246876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb" (OuterVolumeSpecName: "kube-api-access-tqgtb") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "kube-api-access-tqgtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.297035 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344715 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344754 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344770 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453449 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" exitCode=0 Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453506 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453723 4820 scope.go:117] "RemoveContainer" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.454018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2"} Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.495575 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.542586 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.708972 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff694654-0a77-4fcd-86a3-af752c869359" path="/var/lib/kubelet/pods/ff694654-0a77-4fcd-86a3-af752c869359/volumes" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.835202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 08:28:59 crc kubenswrapper[4820]: I0221 08:28:59.854911 4820 scope.go:117] "RemoveContainer" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:28:59 crc kubenswrapper[4820]: I0221 08:28:59.880453 4820 scope.go:117] "RemoveContainer" 
containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.014946 4820 scope.go:117] "RemoveContainer" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.015557 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": container with ID starting with b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6 not found: ID does not exist" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015599 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} err="failed to get container status \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": rpc error: code = NotFound desc = could not find container \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": container with ID starting with b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015627 4820 scope.go:117] "RemoveContainer" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.015943 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": container with ID starting with acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457 not found: ID does not exist" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:29:00 crc 
kubenswrapper[4820]: I0221 08:29:00.015969 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} err="failed to get container status \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": rpc error: code = NotFound desc = could not find container \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": container with ID starting with acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015987 4820 scope.go:117] "RemoveContainer" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.018017 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": container with ID starting with dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58 not found: ID does not exist" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.018044 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58"} err="failed to get container status \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": rpc error: code = NotFound desc = could not find container \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": container with ID starting with dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.490341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" 
event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerStarted","Data":"633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac"} Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.513421 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-qk6xf" podStartSLOduration=1.393496027 podStartE2EDuration="10.51340105s" podCreationTimestamp="2026-02-21 08:28:50 +0000 UTC" firstStartedPulling="2026-02-21 08:28:50.897933393 +0000 UTC m=+6105.931017601" lastFinishedPulling="2026-02-21 08:29:00.017838426 +0000 UTC m=+6115.050922624" observedRunningTime="2026-02-21 08:29:00.509635738 +0000 UTC m=+6115.542719926" watchObservedRunningTime="2026-02-21 08:29:00.51340105 +0000 UTC m=+6115.546485248" Feb 21 08:29:01 crc kubenswrapper[4820]: I0221 08:29:01.522041 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:01 crc kubenswrapper[4820]: I0221 08:29:01.522373 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" containerID="cri-o://4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" gracePeriod=30 Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.044279 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.171374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"153a0123-545b-4694-8e22-ef2a97ec9939\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.181959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc" (OuterVolumeSpecName: "kube-api-access-5xxlc") pod "153a0123-545b-4694-8e22-ef2a97ec9939" (UID: "153a0123-545b-4694-8e22-ef2a97ec9939"). InnerVolumeSpecName "kube-api-access-5xxlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.273859 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509657 4820 generic.go:334] "Generic (PLEG): container finished" podID="153a0123-545b-4694-8e22-ef2a97ec9939" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" exitCode=2 Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerDied","Data":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerDied","Data":"6f3a59fdd346b4bf2cd6317827d2bf8f9f715934794135b9913b0326998f7186"} Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509998 4820 scope.go:117] "RemoveContainer" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.510098 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.531968 4820 scope.go:117] "RemoveContainer" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.532482 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": container with ID starting with 4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1 not found: ID does not exist" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.532525 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} err="failed to get container status \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": rpc error: code = NotFound desc = could not find container \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": container with ID starting with 4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1 not found: ID does not exist" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.542142 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.555247 4820 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570191 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570854 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570882 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570907 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570915 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570924 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-content" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570932 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-content" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570949 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-utilities" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570956 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-utilities" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.571204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" 
containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.571223 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.574675 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.579263 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.579566 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.618845 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.685865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.685924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.686046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.686107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.792722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.794925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.795009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.806288 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.899217 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:03 crc kubenswrapper[4820]: W0221 08:29:03.374565 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478142ab_f7fa_4bbd_9051_6d1f5e16a9e2.slice/crio-20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3 WatchSource:0}: Error finding container 20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3: Status 404 returned error can't find the container with id 20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.374774 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.519824 4820 generic.go:334] "Generic (PLEG): container finished" podID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerID="633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac" exitCode=0 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.520206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerDied","Data":"633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac"} Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.522709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2","Type":"ContainerStarted","Data":"20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3"} Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589057 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589382 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" 
containerName="ceilometer-central-agent" containerID="cri-o://137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589452 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" containerID="cri-o://5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589501 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" containerID="cri-o://dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589514 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" containerID="cri-o://5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.696973 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:03 crc kubenswrapper[4820]: E0221 08:29:03.697208 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.716097 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="153a0123-545b-4694-8e22-ef2a97ec9939" path="/var/lib/kubelet/pods/153a0123-545b-4694-8e22-ef2a97ec9939/volumes" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536021 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" exitCode=0 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536378 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" exitCode=2 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536388 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" exitCode=0 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.538388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2","Type":"ContainerStarted","Data":"d017a2f457f78c82679e61c7b4f8bf88dada53ee987f76e2dd4337fc205ace8b"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.538428 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.558947 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7493413389999999 podStartE2EDuration="2.558919364s" podCreationTimestamp="2026-02-21 08:29:02 +0000 UTC" firstStartedPulling="2026-02-21 08:29:03.378122346 +0000 UTC m=+6118.411206544" lastFinishedPulling="2026-02-21 08:29:04.187700371 +0000 UTC m=+6119.220784569" observedRunningTime="2026-02-21 08:29:04.557505536 +0000 UTC m=+6119.590589744" watchObservedRunningTime="2026-02-21 08:29:04.558919364 +0000 UTC m=+6119.592003562" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.909109 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033463 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.039369 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts" (OuterVolumeSpecName: "scripts") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.039935 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n" (OuterVolumeSpecName: "kube-api-access-2jk6n") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "kube-api-access-2jk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.063463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.066090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data" (OuterVolumeSpecName: "config-data") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136465 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136474 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136483 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerDied","Data":"e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252"} Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549098 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549060 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.553109 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" exitCode=0 Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.553186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e"} Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.941611 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.056631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.056982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057138 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.058321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.059976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.063153 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j" (OuterVolumeSpecName: "kube-api-access-26m8j") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "kube-api-access-26m8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.069735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts" (OuterVolumeSpecName: "scripts") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.088314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.153439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data" (OuterVolumeSpecName: "config-data") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.154648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160266 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160320 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160341 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 
08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160349 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160357 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160368 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"237af00766cb3ac668153a70322a571c05a31fa10748184013c7aedd5f203ded"} Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565813 4820 scope.go:117] "RemoveContainer" containerID="5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565905 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.596751 4820 scope.go:117] "RemoveContainer" containerID="dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.610589 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.623884 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.624701 4820 scope.go:117] "RemoveContainer" containerID="5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636692 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636718 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636727 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636745 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636756 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636778 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636786 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636803 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637046 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637064 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637085 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637102 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637117 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.639154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643277 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643423 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.647520 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.664364 4820 scope.go:117] "RemoveContainer" containerID="137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 
08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669694 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669934 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772274 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772879 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.773597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.777422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.779050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.788546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.970196 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:07 crc kubenswrapper[4820]: W0221 08:29:07.417224 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf10eadaa_dba6_443d_8cc4_fd1604d40ac1.slice/crio-548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef WatchSource:0}: Error finding container 548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef: Status 404 returned error can't find the container with id 548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.417659 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.585195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef"} Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.710339 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" path="/var/lib/kubelet/pods/042d4af3-fd72-450a-a2e8-e296886b495a/volumes" Feb 21 08:29:08 crc kubenswrapper[4820]: I0221 08:29:08.597308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db"} Feb 21 08:29:08 crc kubenswrapper[4820]: I0221 08:29:08.597731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6"} Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.608803 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a"} Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.820304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.842649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.894289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.895781 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.896276 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.898383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.957523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958126 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 
crc kubenswrapper[4820]: I0221 08:29:10.093537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.094580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.094694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.108863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.280999 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.837047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.632998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.633493 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.634784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.634839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"9d9fd65faad6e1c4ee6d0dde3d20356e1e09d8c8be56847a345653bfe39e1ea5"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.663866 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.752305153 podStartE2EDuration="5.66384671s" podCreationTimestamp="2026-02-21 08:29:06 +0000 UTC" firstStartedPulling="2026-02-21 08:29:07.41995691 +0000 UTC m=+6122.453041108" lastFinishedPulling="2026-02-21 08:29:10.331498467 +0000 UTC m=+6125.364582665" observedRunningTime="2026-02-21 08:29:11.658948128 +0000 UTC m=+6126.692032366" watchObservedRunningTime="2026-02-21 08:29:11.66384671 +0000 UTC m=+6126.696930908" Feb 21 08:29:12 crc kubenswrapper[4820]: I0221 08:29:12.917831 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.272802 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.378811 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.664296 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent" containerID="cri-o://f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60"} Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665554 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd" containerID="cri-o://1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665700 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core" containerID="cri-o://0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665823 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent" 
containerID="cri-o://75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db" gracePeriod=30 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679140 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f" exitCode=0 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679182 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a" exitCode=2 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679193 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db" exitCode=0 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.697713 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:14 crc kubenswrapper[4820]: E0221 08:29:14.698004 
4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.726950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9"} Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.738690 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6" exitCode=0 Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.738753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6"} Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.875197 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.955716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956314 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956496 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956808 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.957170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.957827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958152 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958651 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958734 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.962188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts" (OuterVolumeSpecName: "scripts") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.962911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km" (OuterVolumeSpecName: "kube-api-access-pz6km") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "kube-api-access-pz6km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.991587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.014573 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.040825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060214 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060262 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060273 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060281 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz6km\" (UniqueName: 
\"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060290 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.065752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data" (OuterVolumeSpecName: "config-data") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.162212 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef"} Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753793 4820 scope.go:117] "RemoveContainer" containerID="1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753481 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.803004 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.817977 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.831193 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.831729 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.831750 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.831984 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832699 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.832768 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832789 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd" Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.832808 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832817 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833060 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833104 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833117 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.843633 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.843735 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.861620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.862887 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.869485 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983347 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.085826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.085940 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: 
\"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.089718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.099136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.099789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.115115 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.199541 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.511453 4820 scope.go:117] "RemoveContainer" containerID="0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.541255 4820 scope.go:117] "RemoveContainer" containerID="75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.643775 4820 scope.go:117] "RemoveContainer" containerID="f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6" Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.714092 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" path="/var/lib/kubelet/pods/f10eadaa-dba6-443d-8cc4-fd1604d40ac1/volumes" Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.046598 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8fv99"] Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.055934 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8fv99"] Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.599694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.784447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"88458e5466698b27a94da126f9321d84278fb0967bc046e146ba87624b508dfe"} Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.031644 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"] Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.042533 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"] Feb 21 08:29:21 crc 
kubenswrapper[4820]: I0221 08:29:21.712112 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" path="/var/lib/kubelet/pods/549ebe18-2d08-41b5-ac23-2321a43dfe38/volumes" Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.713290 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" path="/var/lib/kubelet/pods/8f96e017-4a70-45ac-9d44-b57829510e53/volumes" Feb 21 08:29:22 crc kubenswrapper[4820]: I0221 08:29:22.802392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"2347cc70a093f9c6de2675c539be996f97a6480bbc37adc1f7822b6ae412ea70"} Feb 21 08:29:22 crc kubenswrapper[4820]: I0221 08:29:22.805077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf"} Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.811971 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api" containerID="cri-o://e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5" gracePeriod=30 Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812023 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener" containerID="cri-o://9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf" gracePeriod=30 Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812045 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier" 
containerID="cri-o://daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9" gracePeriod=30 Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812058 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator" containerID="cri-o://5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60" gracePeriod=30 Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.845977 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.208344101 podStartE2EDuration="14.84594849s" podCreationTimestamp="2026-02-21 08:29:09 +0000 UTC" firstStartedPulling="2026-02-21 08:29:10.838192021 +0000 UTC m=+6125.871276229" lastFinishedPulling="2026-02-21 08:29:22.47579642 +0000 UTC m=+6137.508880618" observedRunningTime="2026-02-21 08:29:23.831050349 +0000 UTC m=+6138.864134547" watchObservedRunningTime="2026-02-21 08:29:23.84594849 +0000 UTC m=+6138.879032688" Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840167 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60" exitCode=0 Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840539 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5" exitCode=0 Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60"} Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5"} Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.843633 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"94594533bd585a6a58306b93861b0d9521919f0ee210a0c108560d7b84cd6ba1"} Feb 21 08:29:28 crc kubenswrapper[4820]: I0221 08:29:28.696968 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:28 crc kubenswrapper[4820]: E0221 08:29:28.697801 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:29 crc kubenswrapper[4820]: I0221 08:29:29.891886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"d302a721e767d9b472de66d5a5f5d61bb9b34defaa62caf2e3c8972b81687b38"} Feb 21 08:29:30 crc kubenswrapper[4820]: I0221 08:29:30.040811 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:29:30 crc kubenswrapper[4820]: I0221 08:29:30.055347 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:29:32 crc kubenswrapper[4820]: I0221 08:29:32.311383 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" path="/var/lib/kubelet/pods/8ffe0144-e67b-4ea7-8212-5989f992997e/volumes" Feb 21 
08:29:33 crc kubenswrapper[4820]: I0221 08:29:33.994641 4820 scope.go:117] "RemoveContainer" containerID="d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23" Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.034475 4820 scope.go:117] "RemoveContainer" containerID="5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077" Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.252981 4820 scope.go:117] "RemoveContainer" containerID="0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6" Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.303740 4820 scope.go:117] "RemoveContainer" containerID="f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711" Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.950727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"f56913eae09efe6a1d1c4b9a3a343efb64bdcc32cc33cc86a07d6b75b4b4abdb"} Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.951205 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.978715 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.553384932 podStartE2EDuration="16.978692463s" podCreationTimestamp="2026-02-21 08:29:18 +0000 UTC" firstStartedPulling="2026-02-21 08:29:20.609246091 +0000 UTC m=+6135.642330289" lastFinishedPulling="2026-02-21 08:29:34.034553622 +0000 UTC m=+6149.067637820" observedRunningTime="2026-02-21 08:29:34.971740156 +0000 UTC m=+6150.004824354" watchObservedRunningTime="2026-02-21 08:29:34.978692463 +0000 UTC m=+6150.011776671" Feb 21 08:29:39 crc kubenswrapper[4820]: I0221 08:29:39.697699 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:39 crc kubenswrapper[4820]: 
E0221 08:29:39.698683 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:49 crc kubenswrapper[4820]: I0221 08:29:49.213293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.148744 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf" exitCode=137 Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.149325 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9" exitCode=137 Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.148944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf"} Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.149371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9"} Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.386136 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411884 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.412295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.460354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st" (OuterVolumeSpecName: "kube-api-access-9l6st") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "kube-api-access-9l6st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.460941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts" (OuterVolumeSpecName: "scripts") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.516868 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.516912 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.544160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.552958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data" (OuterVolumeSpecName: "config-data") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.619037 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.619080 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.696382 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:54 crc kubenswrapper[4820]: E0221 08:29:54.696843 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"9d9fd65faad6e1c4ee6d0dde3d20356e1e09d8c8be56847a345653bfe39e1ea5"} Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160364 4820 scope.go:117] "RemoveContainer" containerID="9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160062 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.188142 4820 scope.go:117] "RemoveContainer" containerID="daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.198794 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.208569 4820 scope.go:117] "RemoveContainer" containerID="5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.215964 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.229404 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230019 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230039 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator" Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230066 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier" Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230118 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230127 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api" 
Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230137 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230380 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230403 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230423 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230442 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.232795 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.236854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237011 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237144 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237962 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.238147 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.245596 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.251180 4820 scope.go:117] "RemoveContainer" containerID="e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.338832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.338881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339087 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsqv\" (UniqueName: \"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339384 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsqv\" (UniqueName: 
\"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.446203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.446655 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.448793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.454844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.455106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.462063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsqv\" (UniqueName: \"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.550402 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.721359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" path="/var/lib/kubelet/pods/38a5221c-e05a-457c-a5d1-5c0404422efb/volumes" Feb 21 08:29:56 crc kubenswrapper[4820]: I0221 08:29:56.038123 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:56 crc kubenswrapper[4820]: I0221 08:29:56.174961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d29cd41e741c75118675a0b2085bec3983c242922e1153955ba454aa59846ce6"} Feb 21 08:29:57 crc kubenswrapper[4820]: I0221 08:29:57.193948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d52bbb6dff1dad3f9639abd9f75ec1a329eabfd6d40382a99e345c722e43e137"} Feb 21 08:29:57 crc kubenswrapper[4820]: I0221 08:29:57.194351 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"1cf024d079993ee0570e2bceb6693026887e61ade71518c5f06afabf03ab2d9f"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.206984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d6d1189259787e3b26084b33f304875c89d97bb52f6b34d01017989254e26ebf"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.207390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"6f5db994a98dfd6cb0a46770d8ad85bb3777b7c801b6fa47f3c9049cfe541df6"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.247758 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.9273835639999999 podStartE2EDuration="3.247733413s" podCreationTimestamp="2026-02-21 08:29:55 +0000 UTC" firstStartedPulling="2026-02-21 08:29:56.03213079 +0000 UTC m=+6171.065214988" lastFinishedPulling="2026-02-21 08:29:57.352480639 +0000 UTC m=+6172.385564837" observedRunningTime="2026-02-21 08:29:58.239493171 +0000 UTC m=+6173.272577389" watchObservedRunningTime="2026-02-21 08:29:58.247733413 +0000 UTC m=+6173.280817611" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.173639 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.178182 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.180131 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.180344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.192857 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249402 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" 
Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249589 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.352750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod 
\"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.373795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.380593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.504708 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.975410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: W0221 08:30:00.976922 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7930cbc_54a2_4fed_8153_27bb0a44221d.slice/crio-3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8 WatchSource:0}: Error finding container 3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8: Status 404 returned error can't find the container with id 3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8 Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.236785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerStarted","Data":"bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf"} Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.237093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerStarted","Data":"3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8"} Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.289825 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" podStartSLOduration=1.289805748 podStartE2EDuration="1.289805748s" podCreationTimestamp="2026-02-21 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 
08:30:01.274159566 +0000 UTC m=+6176.307243764" watchObservedRunningTime="2026-02-21 08:30:01.289805748 +0000 UTC m=+6176.322889946" Feb 21 08:30:02 crc kubenswrapper[4820]: I0221 08:30:02.247037 4820 generic.go:334] "Generic (PLEG): container finished" podID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerID="bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf" exitCode=0 Feb 21 08:30:02 crc kubenswrapper[4820]: I0221 08:30:02.247136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerDied","Data":"bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf"} Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.632683 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: 
\"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.769424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.770433 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.775911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.776435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms" (OuterVolumeSpecName: "kube-api-access-7bnms") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "kube-api-access-7bnms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.874844 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.874891 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268287 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerDied","Data":"3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8"} Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268417 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.712684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.722918 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 08:30:05 crc kubenswrapper[4820]: I0221 08:30:05.710409 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" 
path="/var/lib/kubelet/pods/bc522f8d-0981-40c6-a17f-c5517c78a9cd/volumes" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.620582 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:06 crc kubenswrapper[4820]: E0221 08:30:06.621115 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.621134 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.621433 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.622758 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.625849 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.634767 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733233 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835440 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835999 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837321 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" 
(UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.865959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.941042 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:07 crc kubenswrapper[4820]: I0221 08:30:07.458447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.305405 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff" exitCode=0 Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.305906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"} Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.306832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerStarted","Data":"dab5f24bf00cb2fd5017ef99f29db8355c02558ab97cb6a0a73e353ab2a7ff13"} Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.316792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" 
event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerStarted","Data":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"} Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.317069 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.347173 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" podStartSLOduration=3.347143578 podStartE2EDuration="3.347143578s" podCreationTimestamp="2026-02-21 08:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:30:09.336706756 +0000 UTC m=+6184.369790954" watchObservedRunningTime="2026-02-21 08:30:09.347143578 +0000 UTC m=+6184.380227776" Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.696933 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:09 crc kubenswrapper[4820]: E0221 08:30:09.697182 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:16 crc kubenswrapper[4820]: I0221 08:30:16.943317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.015036 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.015338 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" containerID="cri-o://0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" gracePeriod=10 Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.156617 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.158973 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.181792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjvk\" (UniqueName: \"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211378 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " 
pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211488 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.280697 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.98:5353: connect: connection refused" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313292 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 
08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313367 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjvk\" (UniqueName: \"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.314463 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.314822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315425 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.342618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjvk\" (UniqueName: 
\"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.390760 4820 generic.go:334] "Generic (PLEG): container finished" podID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerID="0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" exitCode=0 Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.390811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8"} Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.536566 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.689062 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.867878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f" (OuterVolumeSpecName: "kube-api-access-6lr4f") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "kube-api-access-6lr4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.906157 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.907123 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config" (OuterVolumeSpecName: "config") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.912364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.914389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944275 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944325 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944337 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944350 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944366 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:18 crc kubenswrapper[4820]: W0221 08:30:18.049016 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c431de9_6c4a_4279_a63a_bd6742fc68f0.slice/crio-8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9 WatchSource:0}: Error finding container 8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9: Status 404 returned error can't find the container with id 8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9 Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.053856 4820 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.407317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.407621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"d67f845d3717911b1815a01ec1fd7dc0df11dc2b02acfc8a168dc3d28d255825"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411192 4820 scope.go:117] "RemoveContainer" containerID="0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411216 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.471333 4820 scope.go:117] "RemoveContainer" containerID="f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.478391 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.484131 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.446498 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c431de9-6c4a-4279-a63a-bd6742fc68f0" containerID="4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2" exitCode=0 Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.446546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerDied","Data":"4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2"} Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.709903 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" path="/var/lib/kubelet/pods/d22b75bc-f9ca-4b8f-ae95-5d348d367d56/volumes" Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.458993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"bfa8083802a756ad9f9e1dd40034460c68ee5ba8c8c0395850ca2b68518651b7"} Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.459938 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.489443 4820 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" podStartSLOduration=3.489425698 podStartE2EDuration="3.489425698s" podCreationTimestamp="2026-02-21 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:30:20.479301565 +0000 UTC m=+6195.512385773" watchObservedRunningTime="2026-02-21 08:30:20.489425698 +0000 UTC m=+6195.522509896" Feb 21 08:30:22 crc kubenswrapper[4820]: I0221 08:30:22.697415 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:22 crc kubenswrapper[4820]: E0221 08:30:22.698002 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.539229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.613496 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.613781 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns" containerID="cri-o://e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" gracePeriod=10 Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.212032 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.284932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.306759 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx" (OuterVolumeSpecName: "kube-api-access-752kx") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "kube-api-access-752kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config" (OuterVolumeSpecName: "config") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.347117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.364543 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388090 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388119 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388128 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388138 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") on node \"crc\" DevicePath \"\"" Feb 21 
08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388148 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388156 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541351 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" exitCode=0 Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541413 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"} Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"dab5f24bf00cb2fd5017ef99f29db8355c02558ab97cb6a0a73e353ab2a7ff13"} Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541490 4820 scope.go:117] "RemoveContainer" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.576585 4820 scope.go:117] "RemoveContainer" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 
08:30:28.618741 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.625028 4820 scope.go:117] "RemoveContainer" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" Feb 21 08:30:28 crc kubenswrapper[4820]: E0221 08:30:28.625970 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": container with ID starting with e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48 not found: ID does not exist" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.626009 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"} err="failed to get container status \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": rpc error: code = NotFound desc = could not find container \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": container with ID starting with e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48 not found: ID does not exist" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.626029 4820 scope.go:117] "RemoveContainer" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff" Feb 21 08:30:28 crc kubenswrapper[4820]: E0221 08:30:28.627497 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": container with ID starting with 748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff not found: ID does not exist" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff" 
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.627523 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"} err="failed to get container status \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": rpc error: code = NotFound desc = could not find container \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": container with ID starting with 748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff not found: ID does not exist" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.648419 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:29 crc kubenswrapper[4820]: I0221 08:30:29.710186 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" path="/var/lib/kubelet/pods/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca/volumes" Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.045014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.056854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rllks"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.069400 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.087228 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.098401 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.106510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.114698 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.124210 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rllks"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.133366 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.141370 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.149353 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"] Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.157603 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.709214 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10066581-0763-4940-bcba-cdd983819ef7" path="/var/lib/kubelet/pods/10066581-0763-4940-bcba-cdd983819ef7/volumes" Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.709946 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" path="/var/lib/kubelet/pods/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51/volumes" Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.710549 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" path="/var/lib/kubelet/pods/245926d7-e415-4af9-b793-9546bb73dc0c/volumes" Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.711102 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" path="/var/lib/kubelet/pods/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b/volumes" Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.712207 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" path="/var/lib/kubelet/pods/96717fc4-053b-4426-ab50-dc0786c2eb7e/volumes" Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.712839 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" path="/var/lib/kubelet/pods/e47106ba-9033-418d-a248-6f7ee03d05e6/volumes" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.568525 4820 scope.go:117] "RemoveContainer" containerID="4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.615388 4820 scope.go:117] "RemoveContainer" containerID="7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.676704 4820 scope.go:117] "RemoveContainer" containerID="596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.722542 4820 scope.go:117] "RemoveContainer" containerID="d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.786267 4820 scope.go:117] "RemoveContainer" containerID="5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.860273 4820 scope.go:117] "RemoveContainer" containerID="0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829" Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.919563 4820 scope.go:117] "RemoveContainer" containerID="112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8" Feb 21 08:30:36 crc kubenswrapper[4820]: I0221 08:30:36.697030 4820 scope.go:117] "RemoveContainer" 
containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:36 crc kubenswrapper[4820]: E0221 08:30:36.697854 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.908036 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"] Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909224 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909351 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909431 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909615 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="init" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="init" Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909750 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="init" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909810 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="init" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910045 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910106 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910907 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916203 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916206 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916907 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"] Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.917880 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.918033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.991929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.100490 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod 
\"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.101717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.112851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.131745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.229445 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.850631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"] Feb 21 08:30:38 crc kubenswrapper[4820]: W0221 08:30:38.856290 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f7b8c5_1ad0_4d18_bf56_89197679507f.slice/crio-3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e WatchSource:0}: Error finding container 3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e: Status 404 returned error can't find the container with id 3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e Feb 21 08:30:39 crc kubenswrapper[4820]: I0221 08:30:39.678018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerStarted","Data":"3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e"} Feb 21 08:30:47 crc kubenswrapper[4820]: I0221 08:30:47.697717 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:47 crc kubenswrapper[4820]: E0221 08:30:47.699015 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:48 crc kubenswrapper[4820]: I0221 08:30:48.766131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerStarted","Data":"e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9"} Feb 21 08:30:48 crc kubenswrapper[4820]: I0221 08:30:48.804900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" podStartSLOduration=2.280308076 podStartE2EDuration="11.804880373s" podCreationTimestamp="2026-02-21 08:30:37 +0000 UTC" firstStartedPulling="2026-02-21 08:30:38.85929121 +0000 UTC m=+6213.892375408" lastFinishedPulling="2026-02-21 08:30:48.383863507 +0000 UTC m=+6223.416947705" observedRunningTime="2026-02-21 08:30:48.796982169 +0000 UTC m=+6223.830066367" watchObservedRunningTime="2026-02-21 08:30:48.804880373 +0000 UTC m=+6223.837964571" Feb 21 08:30:54 crc kubenswrapper[4820]: I0221 08:30:54.052363 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:30:54 crc kubenswrapper[4820]: I0221 08:30:54.070479 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:30:55 crc kubenswrapper[4820]: I0221 08:30:55.711503 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" path="/var/lib/kubelet/pods/2ae13708-c06f-4967-901f-8ea42fdca38c/volumes" Feb 21 08:31:00 crc kubenswrapper[4820]: I0221 08:31:00.697791 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:31:00 crc kubenswrapper[4820]: E0221 08:31:00.698794 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:31:01 crc kubenswrapper[4820]: I0221 08:31:01.886811 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerID="e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9" exitCode=0 Feb 21 08:31:01 crc kubenswrapper[4820]: I0221 08:31:01.886871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerDied","Data":"e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9"} Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.348375 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448002 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448342 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.462128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8" (OuterVolumeSpecName: "kube-api-access-8kgd8") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "kube-api-access-8kgd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.462355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.485228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory" (OuterVolumeSpecName: "inventory") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.485899 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550737 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") on node \"crc\" DevicePath \"\"" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550775 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550787 4820 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550800 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.902993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" 
event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerDied","Data":"3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e"} Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.903038 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e" Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.903048 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.527989 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"] Feb 21 08:31:10 crc kubenswrapper[4820]: E0221 08:31:10.528927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.528943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.529168 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.529972 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.532353 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533026 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533203 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.548478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"] Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711697 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.813978 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814218 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.820417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.820758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.824389 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.831421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.849061 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:11 crc kubenswrapper[4820]: I0221 08:31:11.448254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"]
Feb 21 08:31:11 crc kubenswrapper[4820]: I0221 08:31:11.975313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerStarted","Data":"bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d"}
Feb 21 08:31:12 crc kubenswrapper[4820]: I0221 08:31:12.991513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerStarted","Data":"ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2"}
Feb 21 08:31:13 crc kubenswrapper[4820]: I0221 08:31:13.022806 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" podStartSLOduration=2.24938692 podStartE2EDuration="3.0227767s" podCreationTimestamp="2026-02-21 08:31:10 +0000 UTC" firstStartedPulling="2026-02-21 08:31:11.459363621 +0000 UTC m=+6246.492447819" lastFinishedPulling="2026-02-21 08:31:12.232753401 +0000 UTC m=+6247.265837599" observedRunningTime="2026-02-21 08:31:13.010098248 +0000 UTC m=+6248.043182476" watchObservedRunningTime="2026-02-21 08:31:13.0227767 +0000 UTC m=+6248.055860898"
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.042510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"]
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.052093 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"]
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.696685 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:14 crc kubenswrapper[4820]: E0221 08:31:14.697114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.029663 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.039867 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.730931 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" path="/var/lib/kubelet/pods/36ace6b1-75c4-451e-b167-1dbe9b2471ca/volumes"
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.731517 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" path="/var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volumes"
Feb 21 08:31:25 crc kubenswrapper[4820]: I0221 08:31:25.697456 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:25 crc kubenswrapper[4820]: E0221 08:31:25.701410 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.159894 4820 scope.go:117] "RemoveContainer" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"
Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.192061 4820 scope.go:117] "RemoveContainer" containerID="401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76"
Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.258229 4820 scope.go:117] "RemoveContainer" containerID="cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737"
Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.314585 4820 scope.go:117] "RemoveContainer" containerID="a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60"
Feb 21 08:31:39 crc kubenswrapper[4820]: I0221 08:31:39.696817 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:39 crc kubenswrapper[4820]: E0221 08:31:39.697436 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:31:52 crc kubenswrapper[4820]: I0221 08:31:52.697435 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:52 crc kubenswrapper[4820]: E0221 08:31:52.698959 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.040383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"]
Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.048821 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"]
Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.710002 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" path="/var/lib/kubelet/pods/f525d5cb-a9d6-4121-bf15-1e7af7974e4f/volumes"
Feb 21 08:32:04 crc kubenswrapper[4820]: I0221 08:32:04.696935 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:32:04 crc kubenswrapper[4820]: E0221 08:32:04.697523 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:32:15 crc kubenswrapper[4820]: I0221 08:32:15.704865 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:32:16 crc kubenswrapper[4820]: I0221 08:32:16.677818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"}
Feb 21 08:32:35 crc kubenswrapper[4820]: I0221 08:32:35.451168 4820 scope.go:117] "RemoveContainer" containerID="34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849"
Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.039260 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-s4h7q"]
Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.048071 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"]
Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.057800 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-s4h7q"]
Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.066188 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"]
Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.711196 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" path="/var/lib/kubelet/pods/84358593-717e-4372-b9bb-28a34fb65b6e/volumes"
Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.712083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" path="/var/lib/kubelet/pods/d69513ef-06f3-4770-9e89-5b7b7fe873b2/volumes"
Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.815927 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.815981 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:34:58 crc kubenswrapper[4820]: I0221 08:34:58.029514 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-27sgb"]
Feb 21 08:34:58 crc kubenswrapper[4820]: I0221 08:34:58.044287 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-27sgb"]
Feb 21 08:34:59 crc kubenswrapper[4820]: I0221 08:34:59.708623 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" path="/var/lib/kubelet/pods/898015a2-3ff9-4c61-b164-4a6961c44884/volumes"
Feb 21 08:35:13 crc kubenswrapper[4820]: I0221 08:35:13.815825 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:35:13 crc kubenswrapper[4820]: I0221 08:35:13.816423 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.578412 4820 scope.go:117] "RemoveContainer" containerID="68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7"
Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.619912 4820 scope.go:117] "RemoveContainer" containerID="14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245"
Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.665744 4820 scope.go:117] "RemoveContainer" containerID="a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560"
Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.816646 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.818723 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.818916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.820016 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.820229 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" gracePeriod=600
Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.524758 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" exitCode=0
Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.524835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"}
Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.525134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"}
Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.525163 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.295303 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.299589 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.320486 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.458193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.458646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.459077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.562041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.562204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.584353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.654454 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.144744 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.787863 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367" exitCode=0
Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.788148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"}
Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.788460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"771767d9cd3d95583fa8ce6ec1ccfb0d4f2276dc7d02d52e89caa08934a3a98f"}
Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.792103 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 08:37:54 crc kubenswrapper[4820]: I0221 08:37:54.817469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"}
Feb 21 08:38:07 crc kubenswrapper[4820]: I0221 08:38:07.933337 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f" exitCode=0
Feb 21 08:38:07 crc kubenswrapper[4820]: I0221 08:38:07.933853 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"}
Feb 21 08:38:09 crc kubenswrapper[4820]: I0221 08:38:09.952549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"}
Feb 21 08:38:09 crc kubenswrapper[4820]: I0221 08:38:09.972974 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fpfw" podStartSLOduration=2.9298510220000002 podStartE2EDuration="19.972956272s" podCreationTimestamp="2026-02-21 08:37:50 +0000 UTC" firstStartedPulling="2026-02-21 08:37:51.791786996 +0000 UTC m=+6646.824871194" lastFinishedPulling="2026-02-21 08:38:08.834892246 +0000 UTC m=+6663.867976444" observedRunningTime="2026-02-21 08:38:09.971535524 +0000 UTC m=+6665.004619742" watchObservedRunningTime="2026-02-21 08:38:09.972956272 +0000 UTC m=+6665.006040470"
Feb 21 08:38:10 crc kubenswrapper[4820]: I0221 08:38:10.655101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:10 crc kubenswrapper[4820]: I0221 08:38:10.655154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:11 crc kubenswrapper[4820]: I0221 08:38:11.701051 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fpfw" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" probeResult="failure" output=<
Feb 21 08:38:11 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 08:38:11 crc kubenswrapper[4820]: >
Feb 21 08:38:13 crc kubenswrapper[4820]: I0221 08:38:13.816259 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:38:13 crc kubenswrapper[4820]: I0221 08:38:13.817583 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:38:20 crc kubenswrapper[4820]: I0221 08:38:20.714913 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:20 crc kubenswrapper[4820]: I0221 08:38:20.769487 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:21 crc kubenswrapper[4820]: I0221 08:38:21.499082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.069975 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fpfw" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" containerID="cri-o://511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" gracePeriod=2
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.672781 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.682845 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") "
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") "
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") "
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683949 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities" (OuterVolumeSpecName: "utilities") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.684290 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.694945 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn" (OuterVolumeSpecName: "kube-api-access-sz9qn") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "kube-api-access-sz9qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.785458 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") on node \"crc\" DevicePath \"\""
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.826211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.888680 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082856 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" exitCode=0
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"}
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"771767d9cd3d95583fa8ce6ec1ccfb0d4f2276dc7d02d52e89caa08934a3a98f"}
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082988 4820 scope.go:117] "RemoveContainer" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.084069 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.112637 4820 scope.go:117] "RemoveContainer" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.131330 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.138613 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"]
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.146407 4820 scope.go:117] "RemoveContainer" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.212373 4820 scope.go:117] "RemoveContainer" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"
Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.213076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": container with ID starting with 511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60 not found: ID does not exist" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.213113 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"} err="failed to get container status \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": rpc error: code = NotFound desc = could not find container \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": container with ID starting with 511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60 not found: ID does not exist"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.213139 4820 scope.go:117] "RemoveContainer" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"
Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.213991 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": container with ID starting with 437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f not found: ID does not exist" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214075 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"} err="failed to get container status \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": rpc error: code = NotFound desc = could not find container \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": container with ID starting with 437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f not found: ID does not exist"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214126 4820 scope.go:117] "RemoveContainer" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"
Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.214628 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": container with ID starting with 34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367 not found: ID does not exist" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214674 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"} err="failed to get container status \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": rpc error: code = NotFound desc = could not find container \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": container with ID starting with 34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367 not found: ID does not exist"
Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.708292 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" path="/var/lib/kubelet/pods/87e227d7-07f4-4f82-9a8f-0527ec367368/volumes"
Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"]
Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913845 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-content"
Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913861 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-content"
Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913880 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-utilities"
Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-utilities"
Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913900 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server"
Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913909 4820
state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.914115 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.915862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.925572 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.051938 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.052062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.052144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 
08:38:26.153976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.177887 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.249436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.760021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:27 crc kubenswrapper[4820]: I0221 08:38:27.124569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} Feb 21 08:38:27 crc kubenswrapper[4820]: I0221 08:38:27.124969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"5867d43fa922ffbae98259790d399db7d1b8ab2f0a64c395b0af3a9f3f6b381f"} Feb 21 08:38:28 crc kubenswrapper[4820]: I0221 08:38:28.135809 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" exitCode=0 Feb 21 08:38:28 crc kubenswrapper[4820]: I0221 08:38:28.135867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} Feb 21 08:38:29 crc kubenswrapper[4820]: I0221 08:38:29.147498 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} Feb 21 08:38:32 crc kubenswrapper[4820]: I0221 08:38:32.173713 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" exitCode=0 Feb 21 08:38:32 crc kubenswrapper[4820]: I0221 08:38:32.173819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} Feb 21 08:38:33 crc kubenswrapper[4820]: I0221 08:38:33.186875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} Feb 21 08:38:33 crc kubenswrapper[4820]: I0221 08:38:33.211844 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9pxh" podStartSLOduration=3.589244064 podStartE2EDuration="8.211816418s" podCreationTimestamp="2026-02-21 08:38:25 +0000 UTC" firstStartedPulling="2026-02-21 08:38:28.139134675 +0000 UTC m=+6683.172218873" lastFinishedPulling="2026-02-21 08:38:32.761707029 +0000 UTC m=+6687.794791227" observedRunningTime="2026-02-21 08:38:33.203918605 +0000 UTC m=+6688.237002823" watchObservedRunningTime="2026-02-21 08:38:33.211816418 +0000 UTC m=+6688.244900616" Feb 21 08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.250630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 
08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.251196 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.299114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:37 crc kubenswrapper[4820]: I0221 08:38:37.269977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:37 crc kubenswrapper[4820]: I0221 08:38:37.330199 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.238071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9pxh" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" containerID="cri-o://5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" gracePeriod=2 Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.731010 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.915759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.915935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.916162 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.916777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities" (OuterVolumeSpecName: "utilities") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.917644 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.923575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm" (OuterVolumeSpecName: "kube-api-access-dn6vm") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "kube-api-access-dn6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.019955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248646 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" exitCode=0 Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"5867d43fa922ffbae98259790d399db7d1b8ab2f0a64c395b0af3a9f3f6b381f"} Feb 21 08:38:40 crc kubenswrapper[4820]: 
I0221 08:38:40.248740 4820 scope.go:117] "RemoveContainer" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248881 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.268527 4820 scope.go:117] "RemoveContainer" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.286529 4820 scope.go:117] "RemoveContainer" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.328995 4820 scope.go:117] "RemoveContainer" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.329485 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": container with ID starting with 5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5 not found: ID does not exist" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.329524 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} err="failed to get container status \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": rpc error: code = NotFound desc = could not find container \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": container with ID starting with 5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5 not found: ID does not exist" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.329555 4820 
scope.go:117] "RemoveContainer" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.330021 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": container with ID starting with 6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e not found: ID does not exist" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330073 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} err="failed to get container status \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": rpc error: code = NotFound desc = could not find container \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": container with ID starting with 6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e not found: ID does not exist" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330108 4820 scope.go:117] "RemoveContainer" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.330507 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": container with ID starting with 8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22 not found: ID does not exist" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330546 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} err="failed to get container status \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": rpc error: code = NotFound desc = could not find container \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": container with ID starting with 8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22 not found: ID does not exist" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.043736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.142749 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.186080 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.196310 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.708597 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" path="/var/lib/kubelet/pods/f0066fa0-f8d5-41f0-9661-d47a8a0e501d/volumes" Feb 21 08:38:43 crc kubenswrapper[4820]: I0221 08:38:43.815948 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:38:43 crc kubenswrapper[4820]: I0221 08:38:43.816315 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:38:48 crc kubenswrapper[4820]: I0221 08:38:48.040078 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:38:48 crc kubenswrapper[4820]: I0221 08:38:48.056256 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.039188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.048151 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.713813 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" path="/var/lib/kubelet/pods/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2/volumes" Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.715082 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e9afae-f779-41ff-af87-712577c90f88" path="/var/lib/kubelet/pods/c1e9afae-f779-41ff-af87-712577c90f88/volumes" Feb 21 08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.035178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.049871 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 
08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.709984 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" path="/var/lib/kubelet/pods/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a/volumes" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.815909 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.816534 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.816584 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.817346 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.817401 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" gracePeriod=600 Feb 21 08:39:14 crc kubenswrapper[4820]: E0221 08:39:14.038112 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545717 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" exitCode=0 Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"} Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545825 4820 scope.go:117] "RemoveContainer" containerID="9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.546706 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:14 crc kubenswrapper[4820]: E0221 08:39:14.547070 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.930176 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931295 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931311 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-content" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-content" Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-utilities" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-utilities" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.933964 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.947570 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075309 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177585 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.178212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.178217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.205393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.255948 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.697264 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:26 crc kubenswrapper[4820]: E0221 08:39:26.698023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.843440 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654477 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a" exitCode=0 Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a"} Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"70bbf3d2c2578d3d824cbe35868ac37f6ebd2b422ed2d2573ff8688f77bd092e"} Feb 21 08:39:29 crc kubenswrapper[4820]: I0221 08:39:29.673578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" 
event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b"} Feb 21 08:39:30 crc kubenswrapper[4820]: I0221 08:39:30.683795 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b" exitCode=0 Feb 21 08:39:30 crc kubenswrapper[4820]: I0221 08:39:30.683843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b"} Feb 21 08:39:32 crc kubenswrapper[4820]: I0221 08:39:32.703913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008"} Feb 21 08:39:32 crc kubenswrapper[4820]: I0221 08:39:32.730727 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gpq6" podStartSLOduration=3.297798436 podStartE2EDuration="7.730705849s" podCreationTimestamp="2026-02-21 08:39:25 +0000 UTC" firstStartedPulling="2026-02-21 08:39:27.657755989 +0000 UTC m=+6742.690840187" lastFinishedPulling="2026-02-21 08:39:32.090663402 +0000 UTC m=+6747.123747600" observedRunningTime="2026-02-21 08:39:32.721735058 +0000 UTC m=+6747.754819256" watchObservedRunningTime="2026-02-21 08:39:32.730705849 +0000 UTC m=+6747.763790047" Feb 21 08:39:35 crc kubenswrapper[4820]: I0221 08:39:35.837743 4820 scope.go:117] "RemoveContainer" containerID="0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a" Feb 21 08:39:35 crc kubenswrapper[4820]: I0221 08:39:35.947754 4820 scope.go:117] "RemoveContainer" 
containerID="633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.256120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.256466 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.294652 4820 scope.go:117] "RemoveContainer" containerID="283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.300866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:40 crc kubenswrapper[4820]: I0221 08:39:40.696890 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:40 crc kubenswrapper[4820]: E0221 08:39:40.697433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.981617 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.984025 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.993051 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.056744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.056892 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.057055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159607 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.160225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.160344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.181125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.325352 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.902671 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:45 crc kubenswrapper[4820]: W0221 08:39:45.903664 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c219c3_571a_4a37_9baf_065b6ccbf560.slice/crio-b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797 WatchSource:0}: Error finding container b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797: Status 404 returned error can't find the container with id b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797 Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.315912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842507 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" exitCode=0 Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac"} Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797"} Feb 21 08:39:47 crc kubenswrapper[4820]: I0221 08:39:47.852309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.559739 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.560304 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gpq6" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" containerID="cri-o://7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" gracePeriod=2 Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.880442 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" exitCode=0 Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.880550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008"} Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.064617 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.144650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.144857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.145004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.150374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2" (OuterVolumeSpecName: "kube-api-access-d2bh2") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "kube-api-access-d2bh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.153137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities" (OuterVolumeSpecName: "utilities") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.179264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247716 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247767 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.930578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"70bbf3d2c2578d3d824cbe35868ac37f6ebd2b422ed2d2573ff8688f77bd092e"} Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.931058 4820 scope.go:117] "RemoveContainer" containerID="7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.931331 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.974499 4820 scope.go:117] "RemoveContainer" containerID="9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b" Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.005778 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.017569 4820 scope.go:117] "RemoveContainer" containerID="4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a" Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.018061 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.942444 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" exitCode=0 Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.942656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} Feb 21 08:39:51 crc kubenswrapper[4820]: I0221 08:39:51.696850 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:51 crc kubenswrapper[4820]: E0221 08:39:51.697133 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:51 crc kubenswrapper[4820]: I0221 08:39:51.707885 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" path="/var/lib/kubelet/pods/56a084d3-5261-4bd8-9d65-ec3b63e30653/volumes" Feb 21 08:39:52 crc kubenswrapper[4820]: I0221 08:39:52.966983 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} Feb 21 08:39:52 crc kubenswrapper[4820]: I0221 08:39:52.990723 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zx954" podStartSLOduration=3.9662401000000003 podStartE2EDuration="8.990699863s" podCreationTimestamp="2026-02-21 08:39:44 +0000 UTC" firstStartedPulling="2026-02-21 08:39:46.844204634 +0000 UTC m=+6761.877288832" lastFinishedPulling="2026-02-21 08:39:51.868664397 +0000 UTC m=+6766.901748595" observedRunningTime="2026-02-21 08:39:52.987490377 +0000 UTC m=+6768.020574595" watchObservedRunningTime="2026-02-21 08:39:52.990699863 +0000 UTC m=+6768.023784061" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.326444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.328518 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.373566 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:05 crc kubenswrapper[4820]: I0221 08:40:05.373114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:05 crc kubenswrapper[4820]: I0221 08:40:05.424439 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.114183 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zx954" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" containerID="cri-o://88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" gracePeriod=2 Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.580552 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.627457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities" (OuterVolumeSpecName: "utilities") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.634459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc" (OuterVolumeSpecName: "kube-api-access-xg9wc") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "kube-api-access-xg9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.692257 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.696674 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:06 crc kubenswrapper[4820]: E0221 08:40:06.697057 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728495 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728548 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728562 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124101 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" exitCode=0 Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" 
event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797"} Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124188 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124203 4820 scope.go:117] "RemoveContainer" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.149525 4820 scope.go:117] "RemoveContainer" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.165276 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.174094 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.189658 4820 scope.go:117] "RemoveContainer" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.214607 4820 scope.go:117] "RemoveContainer" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.214984 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": container 
with ID starting with 88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32 not found: ID does not exist" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215022 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} err="failed to get container status \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": rpc error: code = NotFound desc = could not find container \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": container with ID starting with 88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32 not found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215048 4820 scope.go:117] "RemoveContainer" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.215407 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": container with ID starting with bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f not found: ID does not exist" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215435 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} err="failed to get container status \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": rpc error: code = NotFound desc = could not find container \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": container with ID starting with bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f not 
found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215449 4820 scope.go:117] "RemoveContainer" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.215844 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": container with ID starting with ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac not found: ID does not exist" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215878 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac"} err="failed to get container status \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": rpc error: code = NotFound desc = could not find container \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": container with ID starting with ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac not found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.708401 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" path="/var/lib/kubelet/pods/10c219c3-571a-4a37-9baf-065b6ccbf560/volumes" Feb 21 08:40:18 crc kubenswrapper[4820]: I0221 08:40:18.697064 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:18 crc kubenswrapper[4820]: E0221 08:40:18.697765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:33 crc kubenswrapper[4820]: I0221 08:40:33.696863 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:33 crc kubenswrapper[4820]: E0221 08:40:33.697758 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:45 crc kubenswrapper[4820]: I0221 08:40:45.705099 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:45 crc kubenswrapper[4820]: E0221 08:40:45.706184 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:00 crc kubenswrapper[4820]: I0221 08:41:00.697403 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:00 crc kubenswrapper[4820]: E0221 08:41:00.698185 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:12 crc kubenswrapper[4820]: I0221 08:41:12.696590 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:12 crc kubenswrapper[4820]: E0221 08:41:12.698209 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:14 crc kubenswrapper[4820]: I0221 08:41:14.697003 4820 generic.go:334] "Generic (PLEG): container finished" podID="8acec915-5e23-4212-9bce-50fec475c433" containerID="ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2" exitCode=0 Feb 21 08:41:14 crc kubenswrapper[4820]: I0221 08:41:14.697081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerDied","Data":"ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2"} Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.145112 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.200407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.200518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.201322 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.201393 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.205714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8" (OuterVolumeSpecName: "kube-api-access-zxkx8") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "kube-api-access-zxkx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.206117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.226642 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.227880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory" (OuterVolumeSpecName: "inventory") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303566 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303612 4820 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303624 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303634 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerDied","Data":"bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d"} Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714856 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714867 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:41:25 crc kubenswrapper[4820]: I0221 08:41:25.704786 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:25 crc kubenswrapper[4820]: E0221 08:41:25.706798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.772595 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773361 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773381 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773396 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773404 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773429 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc 
kubenswrapper[4820]: I0221 08:41:29.773439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773454 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773461 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773488 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773507 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773514 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773528 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773535 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773743 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" 
Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773764 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773790 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.774667 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782754 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782757 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782873 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.783020 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.787802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886363 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod 
\"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.994882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.995431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.995561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.010005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.094099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.670024 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.830649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerStarted","Data":"939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7"} Feb 21 08:41:31 crc kubenswrapper[4820]: I0221 08:41:31.842368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerStarted","Data":"5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2"} Feb 21 08:41:31 crc kubenswrapper[4820]: I0221 08:41:31.867666 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" podStartSLOduration=2.426498957 podStartE2EDuration="2.867591793s" podCreationTimestamp="2026-02-21 08:41:29 
+0000 UTC" firstStartedPulling="2026-02-21 08:41:30.671170242 +0000 UTC m=+6865.704254440" lastFinishedPulling="2026-02-21 08:41:31.112263078 +0000 UTC m=+6866.145347276" observedRunningTime="2026-02-21 08:41:31.864662494 +0000 UTC m=+6866.897746702" watchObservedRunningTime="2026-02-21 08:41:31.867591793 +0000 UTC m=+6866.900675991" Feb 21 08:41:40 crc kubenswrapper[4820]: I0221 08:41:40.696965 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:40 crc kubenswrapper[4820]: E0221 08:41:40.697785 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:53 crc kubenswrapper[4820]: I0221 08:41:53.697785 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:53 crc kubenswrapper[4820]: E0221 08:41:53.698422 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:04 crc kubenswrapper[4820]: I0221 08:42:04.697098 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:04 crc kubenswrapper[4820]: E0221 08:42:04.697874 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:17 crc kubenswrapper[4820]: I0221 08:42:17.696621 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:17 crc kubenswrapper[4820]: E0221 08:42:17.697328 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:28 crc kubenswrapper[4820]: I0221 08:42:28.696558 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:28 crc kubenswrapper[4820]: E0221 08:42:28.697304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:39 crc kubenswrapper[4820]: I0221 08:42:39.696729 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:39 crc kubenswrapper[4820]: E0221 08:42:39.699148 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:54 crc kubenswrapper[4820]: I0221 08:42:54.697001 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:54 crc kubenswrapper[4820]: E0221 08:42:54.697914 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:05 crc kubenswrapper[4820]: I0221 08:43:05.707041 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:05 crc kubenswrapper[4820]: E0221 08:43:05.708044 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:20 crc kubenswrapper[4820]: I0221 08:43:20.697780 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:20 crc kubenswrapper[4820]: E0221 08:43:20.700410 4820 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:35 crc kubenswrapper[4820]: I0221 08:43:35.708358 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:35 crc kubenswrapper[4820]: E0221 08:43:35.710896 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:46 crc kubenswrapper[4820]: I0221 08:43:46.697910 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:46 crc kubenswrapper[4820]: E0221 08:43:46.699187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:00 crc kubenswrapper[4820]: I0221 08:44:00.697277 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:00 crc kubenswrapper[4820]: E0221 08:44:00.698757 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:13 crc kubenswrapper[4820]: I0221 08:44:13.697027 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:13 crc kubenswrapper[4820]: E0221 08:44:13.697889 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:28 crc kubenswrapper[4820]: I0221 08:44:28.697576 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:29 crc kubenswrapper[4820]: I0221 08:44:29.486190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} Feb 21 08:44:43 crc kubenswrapper[4820]: I0221 08:44:43.602120 4820 generic.go:334] "Generic (PLEG): container finished" podID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerID="5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2" exitCode=0 Feb 21 08:44:43 crc kubenswrapper[4820]: I0221 08:44:43.602277 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerDied","Data":"5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2"} Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.133008 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.230978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.236589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.236813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r" (OuterVolumeSpecName: "kube-api-access-v978r") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "kube-api-access-v978r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.261023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory" (OuterVolumeSpecName: "inventory") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.265523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334375 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334408 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334419 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334427 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.624876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerDied","Data":"939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7"} Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.625295 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.624957 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724114 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:45 crc kubenswrapper[4820]: E0221 08:44:45.724519 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724538 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724788 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.726655 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.729476 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731645 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731815 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.736381 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846541 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846686 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.949193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.950175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.950203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.953636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " 
pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.954104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.969798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.051704 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.578516 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.602178 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.637093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerStarted","Data":"d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18"} Feb 21 08:44:47 crc kubenswrapper[4820]: I0221 08:44:47.649279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerStarted","Data":"1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b"} Feb 21 08:44:47 crc kubenswrapper[4820]: I0221 08:44:47.678777 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" podStartSLOduration=2.185582396 podStartE2EDuration="2.67875432s" podCreationTimestamp="2026-02-21 08:44:45 +0000 UTC" firstStartedPulling="2026-02-21 08:44:46.57825853 +0000 UTC m=+7061.611342728" lastFinishedPulling="2026-02-21 08:44:47.071430454 +0000 UTC m=+7062.104514652" observedRunningTime="2026-02-21 08:44:47.668696436 +0000 UTC m=+7062.701780634" watchObservedRunningTime="2026-02-21 08:44:47.67875432 +0000 UTC m=+7062.711838518" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.144525 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.146671 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.148872 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.154980 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.173948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.353934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.354009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.354100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.355369 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.370016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.370112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.475400 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.911185 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: W0221 08:45:00.921378 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode44294e9_1a1b_421f_bed6_f72a8bb45e1d.slice/crio-46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995 WatchSource:0}: Error finding container 46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995: Status 404 returned error can't find the container with id 46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995 Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.767350 4820 generic.go:334] "Generic (PLEG): container finished" podID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerID="d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2" exitCode=0 Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.767407 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerDied","Data":"d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2"} Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.768726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerStarted","Data":"46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995"} Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.100580 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211890 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.212431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.214318 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.218407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz" (OuterVolumeSpecName: "kube-api-access-lfrhz") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "kube-api-access-lfrhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.218422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.316661 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.316704 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerDied","Data":"46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995"} Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791660 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791566 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:04 crc kubenswrapper[4820]: I0221 08:45:04.194016 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:45:04 crc kubenswrapper[4820]: I0221 08:45:04.203032 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:45:05 crc kubenswrapper[4820]: I0221 08:45:05.708810 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" path="/var/lib/kubelet/pods/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1/volumes" Feb 21 08:45:36 crc kubenswrapper[4820]: I0221 08:45:36.739965 4820 scope.go:117] "RemoveContainer" containerID="2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f" Feb 21 08:46:39 crc kubenswrapper[4820]: I0221 08:46:39.759583 4820 generic.go:334] "Generic (PLEG): container finished" podID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerID="1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b" exitCode=0 Feb 21 08:46:39 crc kubenswrapper[4820]: I0221 08:46:39.759626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerDied","Data":"1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b"} Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.240220 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354909 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.361006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l" (OuterVolumeSpecName: "kube-api-access-z5d9l") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "kube-api-access-z5d9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.386200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory" (OuterVolumeSpecName: "inventory") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.405518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457185 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457259 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457278 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerDied","Data":"d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18"} Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778722 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778735 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.870670 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:41 crc kubenswrapper[4820]: E0221 08:46:41.871154 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871165 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: E0221 08:46:41.871195 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871201 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871400 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871427 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.873132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.875393 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.876824 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.877330 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.878185 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.888863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967748 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.069940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.070075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.070128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.074793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod 
\"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.075089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.087469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.197157 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.762001 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.799004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerStarted","Data":"0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634"} Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.824931 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.825327 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.845663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerStarted","Data":"20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26"} Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.876506 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" podStartSLOduration=2.477238859 podStartE2EDuration="2.876486043s" podCreationTimestamp="2026-02-21 
08:46:41 +0000 UTC" firstStartedPulling="2026-02-21 08:46:42.775076909 +0000 UTC m=+7177.808161107" lastFinishedPulling="2026-02-21 08:46:43.174324093 +0000 UTC m=+7178.207408291" observedRunningTime="2026-02-21 08:46:43.865413832 +0000 UTC m=+7178.898498050" watchObservedRunningTime="2026-02-21 08:46:43.876486043 +0000 UTC m=+7178.909570241" Feb 21 08:47:13 crc kubenswrapper[4820]: I0221 08:47:13.816159 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:47:13 crc kubenswrapper[4820]: I0221 08:47:13.816840 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.816535 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.817256 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.817359 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.819029 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.819136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" gracePeriod=600 Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.362694 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" exitCode=0 Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.362780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.363102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.363128 4820 scope.go:117] "RemoveContainer" 
containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:48:04 crc kubenswrapper[4820]: I0221 08:48:04.527978 4820 generic.go:334] "Generic (PLEG): container finished" podID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerID="20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26" exitCode=0 Feb 21 08:48:04 crc kubenswrapper[4820]: I0221 08:48:04.528094 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerDied","Data":"20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26"} Feb 21 08:48:05 crc kubenswrapper[4820]: I0221 08:48:05.948399 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.046604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.047553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.047856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: 
I0221 08:48:06.052734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4" (OuterVolumeSpecName: "kube-api-access-x5cr4") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "kube-api-access-x5cr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.077852 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.079228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory" (OuterVolumeSpecName: "inventory") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151254 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151295 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151306 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerDied","Data":"0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634"} Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545712 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545715 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.632935 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:06 crc kubenswrapper[4820]: E0221 08:48:06.634142 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.634170 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.634482 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.635377 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.645308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.646277 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.650841 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.650948 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.651651 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.660815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.660889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.661005 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.782867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: 
\"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.783551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.787231 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.954822 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:07 crc kubenswrapper[4820]: I0221 08:48:07.445088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:07 crc kubenswrapper[4820]: I0221 08:48:07.554091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerStarted","Data":"2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc"} Feb 21 08:48:08 crc kubenswrapper[4820]: I0221 08:48:08.566173 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerStarted","Data":"4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3"} Feb 21 08:48:08 crc kubenswrapper[4820]: I0221 08:48:08.586430 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" podStartSLOduration=1.9522718110000001 podStartE2EDuration="2.586411624s" podCreationTimestamp="2026-02-21 08:48:06 +0000 UTC" firstStartedPulling="2026-02-21 08:48:07.450040841 +0000 UTC m=+7262.483125039" lastFinishedPulling="2026-02-21 08:48:08.084180654 +0000 UTC m=+7263.117264852" observedRunningTime="2026-02-21 08:48:08.585282693 +0000 UTC m=+7263.618366891" watchObservedRunningTime="2026-02-21 08:48:08.586411624 +0000 UTC m=+7263.619495822" Feb 21 08:48:13 crc kubenswrapper[4820]: I0221 08:48:13.625455 4820 generic.go:334] "Generic (PLEG): container finished" podID="15b9de10-7535-4310-9681-2d0171fb4376" containerID="4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3" exitCode=0 Feb 21 08:48:13 crc kubenswrapper[4820]: I0221 08:48:13.625561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerDied","Data":"4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3"} Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.062923 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237122 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.242458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9" (OuterVolumeSpecName: "kube-api-access-vmhm9") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "kube-api-access-vmhm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.264834 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.279299 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory" (OuterVolumeSpecName: "inventory") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.341145 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.342012 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.342230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" 
event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerDied","Data":"2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc"} Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648428 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648449 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.720989 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"] Feb 21 08:48:15 crc kubenswrapper[4820]: E0221 08:48:15.721468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.721487 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.721670 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.722419 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.725633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.725955 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.726558 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.726640 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.740391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"] Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887495 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887666 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.993132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 
08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.994021 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.008041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.049480 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.577545 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"]
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.659354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerStarted","Data":"41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"}
Feb 21 08:48:17 crc kubenswrapper[4820]: I0221 08:48:17.671057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerStarted","Data":"66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a"}
Feb 21 08:48:17 crc kubenswrapper[4820]: I0221 08:48:17.696883 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-79fjr" podStartSLOduration=2.306038769 podStartE2EDuration="2.696861825s" podCreationTimestamp="2026-02-21 08:48:15 +0000 UTC" firstStartedPulling="2026-02-21 08:48:16.584296667 +0000 UTC m=+7271.617380865" lastFinishedPulling="2026-02-21 08:48:16.975119723 +0000 UTC m=+7272.008203921" observedRunningTime="2026-02-21 08:48:17.690727089 +0000 UTC m=+7272.723811287" watchObservedRunningTime="2026-02-21 08:48:17.696861825 +0000 UTC m=+7272.729946033"
Feb 21 08:49:01 crc kubenswrapper[4820]: I0221 08:49:01.103506 4820 generic.go:334] "Generic (PLEG): container finished" podID="8f2548bf-793b-464b-9659-2962669f353e" containerID="66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a" exitCode=0
Feb 21 08:49:01 crc kubenswrapper[4820]: I0221 08:49:01.103638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerDied","Data":"66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a"}
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.514093 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.639759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.640004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.640117 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.646032 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t" (OuterVolumeSpecName: "kube-api-access-wcd5t") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "kube-api-access-wcd5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.668602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.669831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory" (OuterVolumeSpecName: "inventory") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742716 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742762 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742773 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.122983 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerDied","Data":"41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"}
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.123022 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.123026 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.203390 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:03 crc kubenswrapper[4820]: E0221 08:49:03.203884 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.203910 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.204116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.205330 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207539 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207585 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207842 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.218742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354153 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.462334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.462936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.474910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.528519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:04 crc kubenswrapper[4820]: I0221 08:49:04.070566 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:04 crc kubenswrapper[4820]: I0221 08:49:04.137790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerStarted","Data":"d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529"}
Feb 21 08:49:05 crc kubenswrapper[4820]: I0221 08:49:05.147703 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerStarted","Data":"0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1"}
Feb 21 08:49:05 crc kubenswrapper[4820]: I0221 08:49:05.188743 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" podStartSLOduration=1.7963288290000001 podStartE2EDuration="2.188723046s" podCreationTimestamp="2026-02-21 08:49:03 +0000 UTC" firstStartedPulling="2026-02-21 08:49:04.073780454 +0000 UTC m=+7319.106864662" lastFinishedPulling="2026-02-21 08:49:04.466174671 +0000 UTC m=+7319.499258879" observedRunningTime="2026-02-21 08:49:05.181199742 +0000 UTC m=+7320.214283930" watchObservedRunningTime="2026-02-21 08:49:05.188723046 +0000 UTC m=+7320.221807244"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.781261 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.783865 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.797030 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.830765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.831294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.831399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.933849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.935035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.953349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:22 crc kubenswrapper[4820]: I0221 08:49:22.109969 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:22 crc kubenswrapper[4820]: I0221 08:49:22.612333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.326879 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f" exitCode=0
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.326976 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"}
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.327192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"d7ed5ad8e0089012d5d820d3260cef995279ba3d115ae6ade6cca75392a8e9c5"}
Feb 21 08:49:24 crc kubenswrapper[4820]: I0221 08:49:24.338399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"}
Feb 21 08:49:26 crc kubenswrapper[4820]: I0221 08:49:26.359127 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565" exitCode=0
Feb 21 08:49:26 crc kubenswrapper[4820]: I0221 08:49:26.359202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"}
Feb 21 08:49:27 crc kubenswrapper[4820]: I0221 08:49:27.370523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"}
Feb 21 08:49:27 crc kubenswrapper[4820]: I0221 08:49:27.388053 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-926lq" podStartSLOduration=2.990070545 podStartE2EDuration="6.388034935s" podCreationTimestamp="2026-02-21 08:49:21 +0000 UTC" firstStartedPulling="2026-02-21 08:49:23.328714243 +0000 UTC m=+7338.361798441" lastFinishedPulling="2026-02-21 08:49:26.726678633 +0000 UTC m=+7341.759762831" observedRunningTime="2026-02-21 08:49:27.385625821 +0000 UTC m=+7342.418710039" watchObservedRunningTime="2026-02-21 08:49:27.388034935 +0000 UTC m=+7342.421119143"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.110925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.111219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.158899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.487531 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.542428 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:34 crc kubenswrapper[4820]: I0221 08:49:34.441678 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-926lq" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server" containerID="cri-o://89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a" gracePeriod=2
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.038097 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118754 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118844 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.120319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities" (OuterVolumeSpecName: "utilities") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.125514 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54" (OuterVolumeSpecName: "kube-api-access-r6n54") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "kube-api-access-r6n54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.175266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.220924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.221169 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.221183 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454113 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a" exitCode=0
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"}
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454218 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454279 4820 scope.go:117] "RemoveContainer" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"d7ed5ad8e0089012d5d820d3260cef995279ba3d115ae6ade6cca75392a8e9c5"}
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.479992 4820 scope.go:117] "RemoveContainer" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.504538 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.513637 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.523564 4820 scope.go:117] "RemoveContainer" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.557585 4820 scope.go:117] "RemoveContainer" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.560867 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": container with ID starting with 89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a not found: ID does not exist" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.560919 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"} err="failed to get container status \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": rpc error: code = NotFound desc = could not find container \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": container with ID starting with 89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.560950 4820 scope.go:117] "RemoveContainer" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.561645 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": container with ID starting with 460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565 not found: ID does not exist" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.561773 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"} err="failed to get container status \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": rpc error: code = NotFound desc = could not find container \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": container with ID starting with 460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565 not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.561789 4820 scope.go:117] "RemoveContainer" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.562089 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": container with ID starting with d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f not found: ID does not exist" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.562115 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"} err="failed to get container status \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": rpc error: code = NotFound desc = could not find container \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": container with ID starting with d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.712642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" path="/var/lib/kubelet/pods/3596f53c-dfdd-4e87-95db-35af3c55ea47/volumes"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.805422 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806376 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806426 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-content"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806432 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-content"
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806445 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-utilities"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806450 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-utilities"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806658 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.808292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.819218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877879 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980736 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.981414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.003432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.147082 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.628057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491066 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" exitCode=0
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430"}
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491171 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"22fb8a14a5b24ef65062617fea32231c70d149b5d219d695fafaa29193fe4716"}
Feb 21 08:49:41 crc kubenswrapper[4820]: I0221 08:49:41.510780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"}
Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.572849 4820 generic.go:334] "Generic (PLEG): container finished" podID="ceace068-0023-4d48-b24d-30cafb14db01" containerID="0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1" exitCode=0
Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.572930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerDied","Data":"0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1"} Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.578935 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" exitCode=0 Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.578980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"} Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.583137 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:49:50 crc kubenswrapper[4820]: I0221 08:49:50.588218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} Feb 21 08:49:50 crc kubenswrapper[4820]: I0221 08:49:50.612477 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56bw7" podStartSLOduration=3.143706142 podStartE2EDuration="13.612454336s" podCreationTimestamp="2026-02-21 08:49:37 +0000 UTC" firstStartedPulling="2026-02-21 08:49:39.495856627 +0000 UTC m=+7354.528940825" lastFinishedPulling="2026-02-21 08:49:49.964604821 +0000 UTC m=+7364.997689019" observedRunningTime="2026-02-21 08:49:50.609405763 +0000 UTC m=+7365.642489961" watchObservedRunningTime="2026-02-21 08:49:50.612454336 +0000 UTC m=+7365.645538534" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.023064 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.162203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx" (OuterVolumeSpecName: "kube-api-access-9xqzx") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "kube-api-access-9xqzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.195923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory" (OuterVolumeSpecName: "inventory") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.197051 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256530 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256563 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256573 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerDied","Data":"d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529"} Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598120 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598143 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.676834 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:51 crc kubenswrapper[4820]: E0221 08:49:51.677369 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.677392 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.677636 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.678536 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.680322 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.680882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.681083 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.685358 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.694411 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766586 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868639 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.873596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.883142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.898610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.998962 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:52 crc kubenswrapper[4820]: I0221 08:49:52.600758 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:52 crc kubenswrapper[4820]: W0221 08:49:52.601164 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090d99c_7240_49ef_85d8_187c0cd6c146.slice/crio-02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65 WatchSource:0}: Error finding container 02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65: Status 404 returned error can't find the container with id 02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65 Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.617402 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerStarted","Data":"3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa"} Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.617655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerStarted","Data":"02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65"} Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.641452 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-4pwnt" podStartSLOduration=2.225744524 podStartE2EDuration="2.641425834s" podCreationTimestamp="2026-02-21 08:49:51 +0000 UTC" firstStartedPulling="2026-02-21 08:49:52.604148181 +0000 UTC m=+7367.637232379" lastFinishedPulling="2026-02-21 08:49:53.019829471 +0000 UTC m=+7368.052913689" observedRunningTime="2026-02-21 08:49:53.639377968 +0000 UTC m=+7368.672462196" watchObservedRunningTime="2026-02-21 08:49:53.641425834 +0000 UTC m=+7368.674510052" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.147380 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.147726 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.208657 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.721587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.763719 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:00 crc kubenswrapper[4820]: I0221 08:50:00.687119 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56bw7" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" 
containerName="registry-server" containerID="cri-o://9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" gracePeriod=2 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.197824 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.263353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities" (OuterVolumeSpecName: "utilities") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.269339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg" (OuterVolumeSpecName: "kube-api-access-746mg") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "kube-api-access-746mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.364917 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.364954 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.396792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.466442 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.698837 4820 generic.go:334] "Generic (PLEG): container finished" podID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerID="3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa" exitCode=0 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.701210 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" exitCode=0 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.701357 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerDied","Data":"3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa"} Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"22fb8a14a5b24ef65062617fea32231c70d149b5d219d695fafaa29193fe4716"} Feb 21 
08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707376 4820 scope.go:117] "RemoveContainer" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.733097 4820 scope.go:117] "RemoveContainer" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.756543 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.768990 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.774757 4820 scope.go:117] "RemoveContainer" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807098 4820 scope.go:117] "RemoveContainer" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.807641 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": container with ID starting with 9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894 not found: ID does not exist" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807749 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} err="failed to get container status \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": rpc error: code = NotFound desc = could not find container \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": container with ID 
starting with 9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894 not found: ID does not exist" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807785 4820 scope.go:117] "RemoveContainer" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.808172 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": container with ID starting with 9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed not found: ID does not exist" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808267 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"} err="failed to get container status \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": rpc error: code = NotFound desc = could not find container \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": container with ID starting with 9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed not found: ID does not exist" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808313 4820 scope.go:117] "RemoveContainer" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.808576 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": container with ID starting with 6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430 not found: ID does not exist" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 
08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808605 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430"} err="failed to get container status \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": rpc error: code = NotFound desc = could not find container \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": container with ID starting with 6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430 not found: ID does not exist" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.106304 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200248 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.205341 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h" (OuterVolumeSpecName: "kube-api-access-k5x5h") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "kube-api-access-k5x5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.231562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.233745 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302887 4820 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302919 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302928 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.707389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" path="/var/lib/kubelet/pods/46eb670c-2901-4efd-b628-bbc1e5c02c60/volumes" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerDied","Data":"02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65"} Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724793 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724835 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.809631 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"] Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810090 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-content" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810107 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-content" Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810124 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810132 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server" Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810145 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack" Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810168 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-utilities" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810180 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-utilities" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810438 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810473 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.811154 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.814654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.817127 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.818338 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.818609 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.824138 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"] Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" 
(UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018561 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.023415 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.024814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.034860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.146211 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.734006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"] Feb 21 08:50:04 crc kubenswrapper[4820]: W0221 08:50:04.739379 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ade5366_52be_4c8f_b9e2_1088b04caa90.slice/crio-9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905 WatchSource:0}: Error finding container 9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905: Status 404 returned error can't find the container with id 9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905 Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.748100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerStarted","Data":"22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460"} Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.748470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerStarted","Data":"9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905"} Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.771927 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-57cnm" podStartSLOduration=2.312584654 podStartE2EDuration="2.771883189s" podCreationTimestamp="2026-02-21 08:50:03 +0000 UTC" firstStartedPulling="2026-02-21 08:50:04.7431678 +0000 UTC m=+7379.776251998" lastFinishedPulling="2026-02-21 08:50:05.202466335 +0000 UTC m=+7380.235550533" observedRunningTime="2026-02-21 08:50:05.770556994 +0000 UTC m=+7380.803641202" 
watchObservedRunningTime="2026-02-21 08:50:05.771883189 +0000 UTC m=+7380.804967387" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.706397 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.709210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.721891 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.752895 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.753624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.753873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855039 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855846 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.856088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.878616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4km9\" 
(UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.032252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.560937 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.834658 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" exitCode=0 Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.834813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d"} Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.835049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"f842890b59139105b6197f590c30468b53f9a314697680080deb4c7ee76c2722"} Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.848518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"} Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.850642 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ade5366-52be-4c8f-b9e2-1088b04caa90" 
containerID="22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460" exitCode=0 Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.850682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerDied","Data":"22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460"} Feb 21 08:50:13 crc kubenswrapper[4820]: I0221 08:50:13.815798 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:50:13 crc kubenswrapper[4820]: I0221 08:50:13.815859 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.275911 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439902 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.446195 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd" (OuterVolumeSpecName: "kube-api-access-xw5xd") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). InnerVolumeSpecName "kube-api-access-xw5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.475055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.486876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory" (OuterVolumeSpecName: "inventory") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.541975 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.542011 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.542020 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.869364 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" exitCode=0 Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.869442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"} Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872931 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerDied","Data":"9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905"} Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872968 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872966 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.977289 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"] Feb 21 08:50:14 crc kubenswrapper[4820]: E0221 08:50:14.977884 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.977900 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.978375 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.979253 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983199 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983693 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.996425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"] Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155660 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257541 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.262893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 
08:50:15.264172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.274733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.311417 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.855089 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"] Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.887305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerStarted","Data":"660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa"} Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.892177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"} Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.917375 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-m9ngv" podStartSLOduration=2.491821685 podStartE2EDuration="5.917351123s" podCreationTimestamp="2026-02-21 08:50:10 +0000 UTC" firstStartedPulling="2026-02-21 08:50:11.839373734 +0000 UTC m=+7386.872457922" lastFinishedPulling="2026-02-21 08:50:15.264903162 +0000 UTC m=+7390.297987360" observedRunningTime="2026-02-21 08:50:15.907393452 +0000 UTC m=+7390.940477670" watchObservedRunningTime="2026-02-21 08:50:15.917351123 +0000 UTC m=+7390.950435321" Feb 21 08:50:16 crc kubenswrapper[4820]: I0221 08:50:16.901078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerStarted","Data":"48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c"} Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.033159 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.033746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.099190 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.126195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" podStartSLOduration=6.672270227 podStartE2EDuration="7.126174556s" podCreationTimestamp="2026-02-21 08:50:14 +0000 UTC" firstStartedPulling="2026-02-21 08:50:15.869593095 +0000 UTC m=+7390.902677293" lastFinishedPulling="2026-02-21 08:50:16.323497424 +0000 UTC m=+7391.356581622" observedRunningTime="2026-02-21 08:50:16.9312181 +0000 UTC m=+7391.964302298" watchObservedRunningTime="2026-02-21 
08:50:21.126174556 +0000 UTC m=+7396.159258764" Feb 21 08:50:22 crc kubenswrapper[4820]: I0221 08:50:22.002885 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:22 crc kubenswrapper[4820]: I0221 08:50:22.067619 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.748858 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.752027 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.762525 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.956105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.956216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.970654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m9ngv" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" containerID="cri-o://244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" gracePeriod=2 Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.978291 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.076695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:24 crc kubenswrapper[4820]: E0221 08:50:24.239188 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5865e706_eb59_4999_b451_4c5001489062.slice/crio-244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5865e706_eb59_4999_b451_4c5001489062.slice/crio-conmon-244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.481514 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571764 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.573383 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities" (OuterVolumeSpecName: "utilities") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.578027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9" (OuterVolumeSpecName: "kube-api-access-g4km9") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "kube-api-access-g4km9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.610976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.629008 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676197 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676262 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989293 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" exitCode=0 Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" 
event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"396dcc2f3776efeb74c16675baa8e8700050f1b9c29a10cfc8a1c94667cc1207"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995901 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" exitCode=0 Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995942 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"f842890b59139105b6197f590c30468b53f9a314697680080deb4c7ee76c2722"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.996013 4820 scope.go:117] "RemoveContainer" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.045819 4820 scope.go:117] "RemoveContainer" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.058134 4820 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.066615 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.086291 4820 scope.go:117] "RemoveContainer" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.115689 4820 scope.go:117] "RemoveContainer" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.116333 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": container with ID starting with 244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd not found: ID does not exist" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116360 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"} err="failed to get container status \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": rpc error: code = NotFound desc = could not find container \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": container with ID starting with 244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116388 4820 scope.go:117] "RemoveContainer" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.116712 4820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": container with ID starting with 8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2 not found: ID does not exist" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"} err="failed to get container status \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": rpc error: code = NotFound desc = could not find container \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": container with ID starting with 8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2 not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116756 4820 scope.go:117] "RemoveContainer" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.117069 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": container with ID starting with d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d not found: ID does not exist" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.117091 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d"} err="failed to get container status \"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": rpc error: code = NotFound desc = could not find container 
\"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": container with ID starting with d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.710964 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5865e706-eb59-4999-b451-4c5001489062" path="/var/lib/kubelet/pods/5865e706-eb59-4999-b451-4c5001489062/volumes" Feb 21 08:50:26 crc kubenswrapper[4820]: I0221 08:50:26.008330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} Feb 21 08:50:27 crc kubenswrapper[4820]: I0221 08:50:27.021083 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" exitCode=0 Feb 21 08:50:27 crc kubenswrapper[4820]: I0221 08:50:27.021157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} Feb 21 08:50:28 crc kubenswrapper[4820]: I0221 08:50:28.032364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} Feb 21 08:50:28 crc kubenswrapper[4820]: I0221 08:50:28.061801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4h88z" podStartSLOduration=2.441727177 podStartE2EDuration="5.061782758s" podCreationTimestamp="2026-02-21 08:50:23 +0000 UTC" 
firstStartedPulling="2026-02-21 08:50:24.99120205 +0000 UTC m=+7400.024286248" lastFinishedPulling="2026-02-21 08:50:27.611257631 +0000 UTC m=+7402.644341829" observedRunningTime="2026-02-21 08:50:28.05340044 +0000 UTC m=+7403.086484658" watchObservedRunningTime="2026-02-21 08:50:28.061782758 +0000 UTC m=+7403.094866956" Feb 21 08:50:33 crc kubenswrapper[4820]: I0221 08:50:33.089101 4820 generic.go:334] "Generic (PLEG): container finished" podID="4449546f-cb82-4976-b53e-cad851a6369d" containerID="48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c" exitCode=0 Feb 21 08:50:33 crc kubenswrapper[4820]: I0221 08:50:33.089170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerDied","Data":"48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c"} Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.078120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.079515 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.187942 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.656406 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.764314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p" (OuterVolumeSpecName: "kube-api-access-mrh4p") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). InnerVolumeSpecName "kube-api-access-mrh4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.815447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.815500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory" (OuterVolumeSpecName: "inventory") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.860434 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.861261 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.861282 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerDied","Data":"660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa"} Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117669 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117426 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.204992 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.209838 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210215 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210378 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210404 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-utilities" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210411 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-utilities" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210423 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210476 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-content" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210483 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-content" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210704 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210714 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.211376 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.214545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.215988 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.216306 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.217202 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218612 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218839 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Feb 21 08:50:35 crc 
kubenswrapper[4820]: I0221 08:50:35.219140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.240721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.312143 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371892 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371979 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372027 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372162 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372267 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372336 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: 
\"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474709 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc 
kubenswrapper[4820]: I0221 08:50:35.475006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.479565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481214 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " 
pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.484760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.487555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.488738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.507539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.530258 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.896939 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: W0221 08:50:35.905027 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddf72439_0ca3_4cbc_8186_fe74744a71e4.slice/crio-ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45 WatchSource:0}: Error finding container ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45: Status 404 returned error can't find the container with id ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45 Feb 21 08:50:36 crc kubenswrapper[4820]: I0221 08:50:36.126401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerStarted","Data":"ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45"} Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.143953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerStarted","Data":"80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94"} Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.144607 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4h88z" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" containerID="cri-o://07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" gracePeriod=2 Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.201634 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" podStartSLOduration=1.545287724 podStartE2EDuration="2.201614829s" podCreationTimestamp="2026-02-21 08:50:35 +0000 UTC" firstStartedPulling="2026-02-21 08:50:35.907446549 +0000 UTC m=+7410.940530747" lastFinishedPulling="2026-02-21 08:50:36.563773654 +0000 UTC m=+7411.596857852" observedRunningTime="2026-02-21 08:50:37.184044301 +0000 UTC m=+7412.217128529" watchObservedRunningTime="2026-02-21 08:50:37.201614829 +0000 UTC m=+7412.234699027" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.578927 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.719756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.719948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.720140 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.720884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities" 
(OuterVolumeSpecName: "utilities") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.726154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47" (OuterVolumeSpecName: "kube-api-access-jmm47") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "kube-api-access-jmm47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.747861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822311 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822677 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822688 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.169902 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" exitCode=0 Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.169963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170014 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"396dcc2f3776efeb74c16675baa8e8700050f1b9c29a10cfc8a1c94667cc1207"} Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170010 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170059 4820 scope.go:117] "RemoveContainer" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.197558 4820 scope.go:117] "RemoveContainer" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.239844 4820 scope.go:117] "RemoveContainer" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.241276 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.251971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.273802 4820 scope.go:117] "RemoveContainer" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.274363 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": container with ID starting with 07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3 not found: ID does not exist" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274420 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} err="failed to get container status \"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": rpc error: code = NotFound desc = could not find container 
\"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": container with ID starting with 07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3 not found: ID does not exist" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274450 4820 scope.go:117] "RemoveContainer" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.274927 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": container with ID starting with 4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757 not found: ID does not exist" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274950 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} err="failed to get container status \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": rpc error: code = NotFound desc = could not find container \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": container with ID starting with 4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757 not found: ID does not exist" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274966 4820 scope.go:117] "RemoveContainer" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.275285 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": container with ID starting with 3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f not found: ID does not exist" 
containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.275328 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f"} err="failed to get container status \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": rpc error: code = NotFound desc = could not find container \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": container with ID starting with 3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f not found: ID does not exist" Feb 21 08:50:39 crc kubenswrapper[4820]: I0221 08:50:39.708397 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" path="/var/lib/kubelet/pods/0df50340-ab5d-4f64-a931-2f795141a7d3/volumes" Feb 21 08:50:43 crc kubenswrapper[4820]: I0221 08:50:43.815759 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:50:43 crc kubenswrapper[4820]: I0221 08:50:43.816166 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:51:11 crc kubenswrapper[4820]: I0221 08:51:11.527103 4820 generic.go:334] "Generic (PLEG): container finished" podID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerID="80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94" exitCode=0 Feb 21 08:51:11 crc kubenswrapper[4820]: I0221 08:51:11.527163 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerDied","Data":"80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94"} Feb 21 08:51:12 crc kubenswrapper[4820]: I0221 08:51:12.976101 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.077888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.077960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078044 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 
08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc 
kubenswrapper[4820]: I0221 08:51:13.079311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079339 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079404 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085964 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086159 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.087728 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088192 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088445 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7" (OuterVolumeSpecName: "kube-api-access-2tqd7") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "kube-api-access-2tqd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088526 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.089931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.090340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.114450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory" (OuterVolumeSpecName: "inventory") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.123352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181852 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181884 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181897 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181906 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181917 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181927 4820 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181936 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181945 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181954 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181964 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181973 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181981 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181989 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 
08:51:13.182000 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.182009 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerDied","Data":"ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45"} Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550297 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550352 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.707793 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708156 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708176 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708201 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-utilities" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708211 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-utilities" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-content" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708224 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-content" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708258 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708523 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708560 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.715610 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.721593 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.721905 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.722050 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.724156 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.725677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.729695 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818587 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818632 4820 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818667 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.819348 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.819402 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" gracePeriod=600 Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.898705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.899182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900823 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.948834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod 
\"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.004637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.007601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.007774 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.011740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.021311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65pv7\" (UniqueName: 
\"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.057615 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.563914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564272 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" exitCode=0 Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564408 4820 scope.go:117] "RemoveContainer" containerID="1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.565086 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:14 crc kubenswrapper[4820]: E0221 08:51:14.565496 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.577376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerStarted","Data":"5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c"} Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.577921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerStarted","Data":"52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4"} Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.603500 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" podStartSLOduration=2.061790109 podStartE2EDuration="2.603477892s" podCreationTimestamp="2026-02-21 08:51:13 +0000 UTC" firstStartedPulling="2026-02-21 08:51:14.568523732 +0000 UTC m=+7449.601607950" lastFinishedPulling="2026-02-21 08:51:15.110211535 +0000 UTC m=+7450.143295733" observedRunningTime="2026-02-21 08:51:15.598205489 +0000 UTC m=+7450.631289687" watchObservedRunningTime="2026-02-21 08:51:15.603477892 +0000 UTC m=+7450.636562090" Feb 21 08:51:29 crc kubenswrapper[4820]: I0221 08:51:29.698146 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:29 crc kubenswrapper[4820]: E0221 08:51:29.699531 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 
21 08:51:41 crc kubenswrapper[4820]: I0221 08:51:41.696824 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:41 crc kubenswrapper[4820]: E0221 08:51:41.697563 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:52 crc kubenswrapper[4820]: I0221 08:51:52.697455 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:52 crc kubenswrapper[4820]: E0221 08:51:52.698830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:05 crc kubenswrapper[4820]: I0221 08:52:05.705372 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:05 crc kubenswrapper[4820]: E0221 08:52:05.706346 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:14 crc kubenswrapper[4820]: I0221 08:52:14.125016 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerID="5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c" exitCode=0 Feb 21 08:52:14 crc kubenswrapper[4820]: I0221 08:52:14.125100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerDied","Data":"5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c"} Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.596946 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.709948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710387 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.717608 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.717977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7" (OuterVolumeSpecName: "kube-api-access-65pv7") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "kube-api-access-65pv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.737585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.738634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.739320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory" (OuterVolumeSpecName: "inventory") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812864 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812899 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812907 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812916 4820 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812927 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerDied","Data":"52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4"} Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143621 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143651 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:16 crc kubenswrapper[4820]: E0221 08:52:16.249588 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249607 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249809 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.250575 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254857 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255188 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255196 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255227 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.265165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323114 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323936 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.425383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.425995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod 
\"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.431398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.443165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.574335 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.115181 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.155515 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerStarted","Data":"cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da"} Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.701825 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:17 crc kubenswrapper[4820]: E0221 08:52:17.702366 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:18 crc kubenswrapper[4820]: I0221 08:52:18.165632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerStarted","Data":"88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d"} Feb 21 08:52:18 crc kubenswrapper[4820]: I0221 08:52:18.185101 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" podStartSLOduration=1.726083605 podStartE2EDuration="2.185085192s" podCreationTimestamp="2026-02-21 08:52:16 +0000 UTC" firstStartedPulling="2026-02-21 08:52:17.114107724 +0000 UTC 
m=+7512.147191922" lastFinishedPulling="2026-02-21 08:52:17.573109311 +0000 UTC m=+7512.606193509" observedRunningTime="2026-02-21 08:52:18.18021675 +0000 UTC m=+7513.213300948" watchObservedRunningTime="2026-02-21 08:52:18.185085192 +0000 UTC m=+7513.218169390" Feb 21 08:52:30 crc kubenswrapper[4820]: I0221 08:52:30.697388 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:30 crc kubenswrapper[4820]: E0221 08:52:30.698145 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:44 crc kubenswrapper[4820]: I0221 08:52:44.697785 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:44 crc kubenswrapper[4820]: E0221 08:52:44.698802 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:59 crc kubenswrapper[4820]: I0221 08:52:59.697003 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:59 crc kubenswrapper[4820]: E0221 08:52:59.697789 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:06 crc kubenswrapper[4820]: I0221 08:53:06.638231 4820 generic.go:334] "Generic (PLEG): container finished" podID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerID="88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d" exitCode=0 Feb 21 08:53:06 crc kubenswrapper[4820]: I0221 08:53:06.638316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerDied","Data":"88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d"} Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.087611 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212936 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213234 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.226880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.226892 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw" (OuterVolumeSpecName: "kube-api-access-58hfw") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "kube-api-access-58hfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.240026 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.241540 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.242414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.250034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory" (OuterVolumeSpecName: "inventory") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316404 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316440 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316452 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316463 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316475 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316486 4820 reconciler_common.go:293] "Volume 
detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerDied","Data":"cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da"} Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656441 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656143 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.762662 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:08 crc kubenswrapper[4820]: E0221 08:53:08.763150 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.763175 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.763406 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.764178 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.768360 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.771013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.776424 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc 
kubenswrapper[4820]: I0221 08:53:08.928912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031754 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031820 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.032099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.035440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.035435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.036372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.036788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.052337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.091317 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.618057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.666606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerStarted","Data":"d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3"} Feb 21 08:53:10 crc kubenswrapper[4820]: I0221 08:53:10.678134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerStarted","Data":"ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3"} Feb 21 08:53:10 crc kubenswrapper[4820]: I0221 08:53:10.699070 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" podStartSLOduration=2.17756973 podStartE2EDuration="2.699049764s" podCreationTimestamp="2026-02-21 08:53:08 +0000 UTC" firstStartedPulling="2026-02-21 08:53:09.6300897 +0000 UTC m=+7564.663173898" lastFinishedPulling="2026-02-21 08:53:10.151569714 +0000 UTC m=+7565.184653932" observedRunningTime="2026-02-21 08:53:10.69483393 +0000 UTC m=+7565.727918138" watchObservedRunningTime="2026-02-21 08:53:10.699049764 +0000 UTC m=+7565.732133962" Feb 21 08:53:12 crc kubenswrapper[4820]: I0221 08:53:12.696727 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:12 crc kubenswrapper[4820]: E0221 08:53:12.697579 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:27 crc kubenswrapper[4820]: I0221 08:53:27.697076 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:27 crc kubenswrapper[4820]: E0221 08:53:27.697866 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:40 crc kubenswrapper[4820]: I0221 08:53:40.697714 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:40 crc kubenswrapper[4820]: E0221 08:53:40.698983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:55 crc kubenswrapper[4820]: I0221 08:53:55.706086 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:55 crc kubenswrapper[4820]: E0221 08:53:55.706924 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:06 crc kubenswrapper[4820]: I0221 08:54:06.696980 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:06 crc kubenswrapper[4820]: E0221 08:54:06.698120 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:19 crc kubenswrapper[4820]: I0221 08:54:19.696855 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:19 crc kubenswrapper[4820]: E0221 08:54:19.697557 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:31 crc kubenswrapper[4820]: I0221 08:54:31.697662 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:31 crc kubenswrapper[4820]: E0221 08:54:31.705408 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:46 crc kubenswrapper[4820]: I0221 08:54:46.697402 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:46 crc kubenswrapper[4820]: E0221 08:54:46.698366 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:58 crc kubenswrapper[4820]: I0221 08:54:58.697539 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:58 crc kubenswrapper[4820]: E0221 08:54:58.698350 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:13 crc kubenswrapper[4820]: I0221 08:55:13.696934 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:13 crc kubenswrapper[4820]: E0221 08:55:13.697732 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:28 crc kubenswrapper[4820]: I0221 08:55:28.697216 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:28 crc kubenswrapper[4820]: E0221 08:55:28.698069 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:39 crc kubenswrapper[4820]: I0221 08:55:39.696965 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:39 crc kubenswrapper[4820]: E0221 08:55:39.697711 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:52 crc kubenswrapper[4820]: I0221 08:55:52.696208 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:52 crc kubenswrapper[4820]: E0221 08:55:52.697051 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:56:05 crc kubenswrapper[4820]: I0221 08:56:05.708173 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:56:05 crc kubenswrapper[4820]: E0221 08:56:05.709272 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:56:20 crc kubenswrapper[4820]: I0221 08:56:20.696763 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:56:21 crc kubenswrapper[4820]: I0221 08:56:21.554806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} Feb 21 08:57:28 crc kubenswrapper[4820]: I0221 08:57:28.781070 4820 generic.go:334] "Generic (PLEG): container finished" podID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerID="ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3" exitCode=0 Feb 21 08:57:28 crc kubenswrapper[4820]: I0221 08:57:28.781147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" 
event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerDied","Data":"ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3"} Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.220455 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.376693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.376869 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9" (OuterVolumeSpecName: "kube-api-access-j2qq9") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "kube-api-access-j2qq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.401004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory" (OuterVolumeSpecName: "inventory") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.404298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.413151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473934 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473972 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473983 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473992 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.474002 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerDied","Data":"d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3"} Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810645 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927039 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:30 crc kubenswrapper[4820]: E0221 08:57:30.927450 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927470 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927687 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.928390 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931559 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931689 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931696 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931550 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.932178 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.944020 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085546 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086086 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188740 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188967 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" 
(UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192995 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.193341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.193545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 
crc kubenswrapper[4820]: I0221 08:57:31.193778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.195104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.195484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.196335 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.208159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.248690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.767217 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.768634 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.820674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerStarted","Data":"d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576"} Feb 21 08:57:32 crc kubenswrapper[4820]: I0221 08:57:32.837191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerStarted","Data":"befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f"} Feb 21 08:57:32 crc kubenswrapper[4820]: I0221 08:57:32.881801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" podStartSLOduration=2.484563344 podStartE2EDuration="2.881781842s" podCreationTimestamp="2026-02-21 08:57:30 +0000 UTC" firstStartedPulling="2026-02-21 08:57:31.768407493 +0000 UTC m=+7826.801491691" lastFinishedPulling="2026-02-21 08:57:32.165625971 +0000 UTC m=+7827.198710189" observedRunningTime="2026-02-21 08:57:32.866289832 +0000 UTC m=+7827.899374040" watchObservedRunningTime="2026-02-21 08:57:32.881781842 +0000 UTC m=+7827.914866040" Feb 21 08:58:43 crc kubenswrapper[4820]: I0221 
08:58:43.816315 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:58:43 crc kubenswrapper[4820]: I0221 08:58:43.816859 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:13 crc kubenswrapper[4820]: I0221 08:59:13.815915 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:59:13 crc kubenswrapper[4820]: I0221 08:59:13.816564 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.816391 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.817321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.818706 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.818837 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" gracePeriod=600 Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434058 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" exitCode=0 Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434771 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.159284 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.161501 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.164774 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.165092 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.170569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345771 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345850 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.347005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.352775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.363110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.490569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.939835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617527 4820 generic.go:334] "Generic (PLEG): container finished" podID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerID="7119a71f451213b74dba191bad0f1b026958a391d5a6a83fbc51ac9fc67a87c6" exitCode=0 Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerDied","Data":"7119a71f451213b74dba191bad0f1b026958a391d5a6a83fbc51ac9fc67a87c6"} Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerStarted","Data":"017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35"} Feb 21 09:00:02 crc kubenswrapper[4820]: I0221 09:00:02.924568 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.001361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.005703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.005998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j" (OuterVolumeSpecName: "kube-api-access-7hw6j") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "kube-api-access-7hw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.102895 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.103192 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.103205 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerDied","Data":"017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35"} Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636251 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636258 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.996312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.004515 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.952309 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:04 crc kubenswrapper[4820]: E0221 09:00:04.953131 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.953154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.953414 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.955261 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.964875 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041099 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143094 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.165010 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.283885 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.707444 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" path="/var/lib/kubelet/pods/6fce41e0-c5c8-4286-8a58-cd620c05f4fc/volumes" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.748035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:05 crc kubenswrapper[4820]: W0221 09:00:05.755676 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda632316b_bf37_4d7e_8a47_1a4d453390bf.slice/crio-1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80 WatchSource:0}: Error finding container 1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80: Status 404 returned error can't find the container with id 1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80 Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.666909 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554" exitCode=0 Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.667006 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"} Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.667290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80"} Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.678880 4820 
generic.go:334] "Generic (PLEG): container finished" podID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerID="befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f" exitCode=0 Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.678960 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerDied","Data":"befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f"} Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.681335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.102679 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240468 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240818 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240881 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.247635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.255228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq" (OuterVolumeSpecName: "kube-api-access-g56mq") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "kube-api-access-g56mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.270117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory" (OuterVolumeSpecName: "inventory") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.271553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.272129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.279466 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.281746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.285633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.288428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.291632 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.303652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344447 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344491 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344505 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344519 4820 reconciler_common.go:293] "Volume detached for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344532 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344544 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344559 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344572 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344585 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344597 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344609 4820 reconciler_common.go:293] "Volume detached for volume 
\"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.703109 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.717637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerDied","Data":"d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576"} Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.717700 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.795873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:09 crc kubenswrapper[4820]: E0221 09:00:09.796415 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.796439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.796674 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.797543 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.800003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.800088 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.801901 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.802338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.802562 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.806335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " 
pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853940 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854108 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854162 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854447 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.956682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod 
\"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.960770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.961144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.961147 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.962031 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.962843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.963380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " 
pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.975781 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.120794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:10 crc kubenswrapper[4820]: W0221 09:00:10.671318 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab763aa_fd5e_41b2_96d8_f758ad76f779.slice/crio-36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f WatchSource:0}: Error finding container 36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f: Status 404 returned error can't find the container with id 36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.674222 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.713476 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711" exitCode=0 Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.713550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} Feb 21 09:00:10 crc 
kubenswrapper[4820]: I0221 09:00:10.715159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerStarted","Data":"36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.733101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerStarted","Data":"32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.735360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.781728 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" podStartSLOduration=2.732144103 podStartE2EDuration="3.78170094s" podCreationTimestamp="2026-02-21 09:00:09 +0000 UTC" firstStartedPulling="2026-02-21 09:00:10.674802056 +0000 UTC m=+7985.707886254" lastFinishedPulling="2026-02-21 09:00:11.724358893 +0000 UTC m=+7986.757443091" observedRunningTime="2026-02-21 09:00:12.760431372 +0000 UTC m=+7987.793515570" watchObservedRunningTime="2026-02-21 09:00:12.78170094 +0000 UTC m=+7987.814785148" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.795418 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jr97f" podStartSLOduration=3.6099838650000002 podStartE2EDuration="8.795401872s" podCreationTimestamp="2026-02-21 09:00:04 +0000 UTC" firstStartedPulling="2026-02-21 09:00:06.669892272 +0000 UTC m=+7981.702976470" 
lastFinishedPulling="2026-02-21 09:00:11.855310279 +0000 UTC m=+7986.888394477" observedRunningTime="2026-02-21 09:00:12.785720229 +0000 UTC m=+7987.818804437" watchObservedRunningTime="2026-02-21 09:00:12.795401872 +0000 UTC m=+7987.828486070" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.944153 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.947161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.963519 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018383 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " 
pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120885 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.121464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.121521 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " 
pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.139538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.264096 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.831448 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.735570 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.738847 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.764023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768181 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" exitCode=0 Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97"} Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"0e08f3f9df523ab78f42d321c8067aaecb631aa2e3579ca30e7e3b80406e5137"} Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 
09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.966617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 
09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.966875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.986653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.061175 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.286036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.286074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.614156 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:15 crc kubenswrapper[4820]: W0221 09:00:15.615299 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f3a2fe_753d_4282_a97e_bd85b3116def.slice/crio-4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867 WatchSource:0}: Error finding container 4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867: Status 404 returned error can't find the container with id 
4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867 Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.778619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867"} Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.352318 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jr97f" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" probeResult="failure" output=< Feb 21 09:00:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:00:16 crc kubenswrapper[4820]: > Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.796125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"} Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.798691 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501" exitCode=0 Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.798753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501"} Feb 21 09:00:18 crc kubenswrapper[4820]: I0221 09:00:18.819256 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" 
event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6"} Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.846871 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6" exitCode=0 Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.846944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6"} Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.852581 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" exitCode=0 Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.852626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"} Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.864321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e"} Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.867925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"} Feb 21 09:00:22 crc kubenswrapper[4820]: 
I0221 09:00:22.892074 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5km2z" podStartSLOduration=3.40571536 podStartE2EDuration="8.89205453s" podCreationTimestamp="2026-02-21 09:00:14 +0000 UTC" firstStartedPulling="2026-02-21 09:00:16.800932964 +0000 UTC m=+7991.834017172" lastFinishedPulling="2026-02-21 09:00:22.287272144 +0000 UTC m=+7997.320356342" observedRunningTime="2026-02-21 09:00:22.883827327 +0000 UTC m=+7997.916911525" watchObservedRunningTime="2026-02-21 09:00:22.89205453 +0000 UTC m=+7997.925138728" Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.906398 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tt9xg" podStartSLOduration=3.478237868 podStartE2EDuration="10.906381719s" podCreationTimestamp="2026-02-21 09:00:12 +0000 UTC" firstStartedPulling="2026-02-21 09:00:14.794170139 +0000 UTC m=+7989.827254337" lastFinishedPulling="2026-02-21 09:00:22.22231399 +0000 UTC m=+7997.255398188" observedRunningTime="2026-02-21 09:00:22.904921409 +0000 UTC m=+7997.938005607" watchObservedRunningTime="2026-02-21 09:00:22.906381719 +0000 UTC m=+7997.939465917" Feb 21 09:00:23 crc kubenswrapper[4820]: I0221 09:00:23.264257 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:23 crc kubenswrapper[4820]: I0221 09:00:23.264521 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:24 crc kubenswrapper[4820]: I0221 09:00:24.315201 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tt9xg" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" probeResult="failure" output=< Feb 21 09:00:24 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:00:24 
crc kubenswrapper[4820]: > Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.062381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.062439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.106688 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.347598 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.411799 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:27 crc kubenswrapper[4820]: I0221 09:00:27.929838 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:27 crc kubenswrapper[4820]: I0221 09:00:27.930092 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jr97f" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" containerID="cri-o://4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" gracePeriod=2 Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.361892 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.549718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities" (OuterVolumeSpecName: "utilities") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.553611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq" (OuterVolumeSpecName: "kube-api-access-jj8nq") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "kube-api-access-jj8nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.650855 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.650898 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.654587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.752640 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927785 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" exitCode=0 Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"} Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80"} Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927874 4820 scope.go:117] "RemoveContainer" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927878 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.949478 4820 scope.go:117] "RemoveContainer" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711" Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.964913 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.975020 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.990534 4820 scope.go:117] "RemoveContainer" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019024 4820 scope.go:117] "RemoveContainer" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" Feb 21 09:00:29 crc kubenswrapper[4820]: E0221 09:00:29.019469 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": container with ID starting with 4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a not found: ID does not exist" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019519 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"} err="failed to get container status \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": rpc error: code = NotFound desc = could not find container \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": container with ID starting with 4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a not found: ID does not exist" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019552 4820 scope.go:117] "RemoveContainer" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711" Feb 21 09:00:29 crc kubenswrapper[4820]: E0221 09:00:29.019994 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": container with ID starting with b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711 not found: ID does not exist" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.020033 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} err="failed to get container status \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": rpc error: code = NotFound desc = could not find container \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": container with ID starting with b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711 not found: ID does not exist" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.020057 4820 scope.go:117] "RemoveContainer" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554" Feb 21 09:00:29 crc kubenswrapper[4820]: E0221 
09:00:29.021273 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": container with ID starting with 53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554 not found: ID does not exist" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.021315 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"} err="failed to get container status \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": rpc error: code = NotFound desc = could not find container \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": container with ID starting with 53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554 not found: ID does not exist" Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.709539 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" path="/var/lib/kubelet/pods/a632316b-bf37-4d7e-8a47-1a4d453390bf/volumes" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.537715 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538141 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538155 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538178 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" 
containerName="extract-utilities" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538185 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-utilities" Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538199 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-content" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538205 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-content" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538431 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.539925 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.550127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " 
pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.790556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.790758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.791708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.791717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " 
pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.792029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.813924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.858174 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.335832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956418 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" exitCode=0 Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9"} Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" 
event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"d9f16c78e908c2b3ec7f01837bc4438493ae43583de8dbcab16b77756c1e4737"} Feb 21 09:00:32 crc kubenswrapper[4820]: I0221 09:00:32.966599 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"} Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.311423 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.358946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.978449 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" exitCode=0 Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.978522 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"} Feb 21 09:00:34 crc kubenswrapper[4820]: I0221 09:00:34.988649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"} Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.011195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9f6vx" podStartSLOduration=2.534222785 
podStartE2EDuration="5.011176889s" podCreationTimestamp="2026-02-21 09:00:30 +0000 UTC" firstStartedPulling="2026-02-21 09:00:31.958304562 +0000 UTC m=+8006.991388760" lastFinishedPulling="2026-02-21 09:00:34.435258656 +0000 UTC m=+8009.468342864" observedRunningTime="2026-02-21 09:00:35.006043699 +0000 UTC m=+8010.039127887" watchObservedRunningTime="2026-02-21 09:00:35.011176889 +0000 UTC m=+8010.044261117" Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.108691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.930957 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.931497 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tt9xg" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" containerID="cri-o://73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" gracePeriod=2 Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.376333 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.503337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities" (OuterVolumeSpecName: "utilities") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.503855 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.508558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8" (OuterVolumeSpecName: "kube-api-access-vljj8") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "kube-api-access-vljj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.560816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.606091 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.606130 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.007975 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" exitCode=0 Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"} Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008048 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008070 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"0e08f3f9df523ab78f42d321c8067aaecb631aa2e3579ca30e7e3b80406e5137"} Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008091 4820 scope.go:117] "RemoveContainer" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.026756 4820 scope.go:117] "RemoveContainer" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.041209 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.048869 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.067186 4820 scope.go:117] "RemoveContainer" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.093512 4820 scope.go:117] "RemoveContainer" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 09:00:37.094007 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": container with ID starting with 73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810 not found: ID does not exist" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094085 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"} err="failed to get container status \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": rpc error: code = NotFound desc = could not find container \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": container with ID starting with 73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810 not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094118 4820 scope.go:117] "RemoveContainer" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 09:00:37.094459 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": container with ID starting with 1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd not found: ID does not exist" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094503 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"} err="failed to get container status \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": rpc error: code = NotFound desc = could not find container \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": container with ID starting with 1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094521 4820 scope.go:117] "RemoveContainer" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 
09:00:37.094771 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": container with ID starting with 629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97 not found: ID does not exist" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094793 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97"} err="failed to get container status \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": rpc error: code = NotFound desc = could not find container \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": container with ID starting with 629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97 not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.140623 4820 scope.go:117] "RemoveContainer" containerID="b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.706197 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" path="/var/lib/kubelet/pods/89494cdc-fddf-40b6-b3c2-31fd3d48810c/volumes" Feb 21 09:00:38 crc kubenswrapper[4820]: I0221 09:00:38.329948 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:38 crc kubenswrapper[4820]: I0221 09:00:38.330202 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5km2z" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" containerID="cri-o://ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" gracePeriod=2 Feb 21 09:00:39 
crc kubenswrapper[4820]: I0221 09:00:39.028352 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" exitCode=0 Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.028397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e"} Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.335592 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463972 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.464550 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities" (OuterVolumeSpecName: "utilities") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.475131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9" (OuterVolumeSpecName: "kube-api-access-jz6n9") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "kube-api-access-jz6n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.511867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565806 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565837 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565858 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867"} Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038819 4820 scope.go:117] "RemoveContainer" containerID="ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038599 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.066898 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.067312 4820 scope.go:117] "RemoveContainer" containerID="11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.078598 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.093317 4820 scope.go:117] "RemoveContainer" containerID="72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.859139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.859538 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.901973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:41 crc kubenswrapper[4820]: I0221 09:00:41.095621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:41 crc kubenswrapper[4820]: I0221 09:00:41.712994 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" path="/var/lib/kubelet/pods/61f3a2fe-753d-4282-a97e-bd85b3116def/volumes" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.331475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:44 crc 
kubenswrapper[4820]: I0221 09:00:44.331956 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9f6vx" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" containerID="cri-o://c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" gracePeriod=2 Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.775606 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.872830 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities" (OuterVolumeSpecName: "utilities") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.879342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp" (OuterVolumeSpecName: "kube-api-access-2f9vp") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "kube-api-access-2f9vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.898774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974591 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974625 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974638 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084126 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" 
containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" exitCode=0 Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084185 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"} Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084253 4820 scope.go:117] "RemoveContainer" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"d9f16c78e908c2b3ec7f01837bc4438493ae43583de8dbcab16b77756c1e4737"} Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.105829 4820 scope.go:117] "RemoveContainer" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.120145 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.129086 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.137537 4820 scope.go:117] "RemoveContainer" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.173828 4820 scope.go:117] "RemoveContainer" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 
09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.174296 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": container with ID starting with c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72 not found: ID does not exist" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174334 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"} err="failed to get container status \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": rpc error: code = NotFound desc = could not find container \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": container with ID starting with c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174359 4820 scope.go:117] "RemoveContainer" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.174759 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": container with ID starting with 53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25 not found: ID does not exist" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"} err="failed to get container status 
\"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": rpc error: code = NotFound desc = could not find container \"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": container with ID starting with 53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174807 4820 scope.go:117] "RemoveContainer" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.175077 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": container with ID starting with 61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9 not found: ID does not exist" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.175100 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9"} err="failed to get container status \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": rpc error: code = NotFound desc = could not find container \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": container with ID starting with 61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.707427 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" path="/var/lib/kubelet/pods/e3d89d26-8978-4853-88af-a72edeee1b7a/volumes" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.156670 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:00 crc 
kubenswrapper[4820]: E0221 09:01:00.157510 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157525 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157535 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157552 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157572 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157591 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157596 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-utilities" Feb 21 09:01:00 crc 
kubenswrapper[4820]: E0221 09:01:00.157608 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157614 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157627 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157632 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157643 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157668 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157674 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157871 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc 
kubenswrapper[4820]: I0221 09:01:00.157900 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.158622 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.175331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299058 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401490 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.409713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.412943 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.413222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.425524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.479958 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: W0221 09:01:00.910108 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c3e367e_0369_46eb_8886_a7d40b0a6626.slice/crio-7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500 WatchSource:0}: Error finding container 7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500: Status 404 returned error can't find the container with id 7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500 Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.913516 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.232110 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerStarted","Data":"8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605"} Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.233953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerStarted","Data":"7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500"} Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.253604 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29527741-49n79" podStartSLOduration=1.253579555 podStartE2EDuration="1.253579555s" podCreationTimestamp="2026-02-21 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:01:01.246941636 +0000 UTC m=+8036.280025834" watchObservedRunningTime="2026-02-21 09:01:01.253579555 +0000 UTC m=+8036.286663753" Feb 21 09:01:04 crc 
kubenswrapper[4820]: I0221 09:01:04.260045 4820 generic.go:334] "Generic (PLEG): container finished" podID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerID="8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605" exitCode=0 Feb 21 09:01:04 crc kubenswrapper[4820]: I0221 09:01:04.260168 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerDied","Data":"8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605"} Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.620186 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707735 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.708035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pf82\" (UniqueName: 
\"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.712396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82" (OuterVolumeSpecName: "kube-api-access-6pf82") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "kube-api-access-6pf82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.714182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.736179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.760808 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data" (OuterVolumeSpecName: "config-data") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810881 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810911 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810921 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280281 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerDied","Data":"7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500"} Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280332 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500" Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280427 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:02:13 crc kubenswrapper[4820]: I0221 09:02:13.816446 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:02:13 crc kubenswrapper[4820]: I0221 09:02:13.816952 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:02:43 crc kubenswrapper[4820]: I0221 09:02:43.816004 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:02:43 crc kubenswrapper[4820]: I0221 09:02:43.816635 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816200 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816635 4820 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816677 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.817394 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.817441 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" gracePeriod=600 Feb 21 09:03:13 crc kubenswrapper[4820]: E0221 09:03:13.942375 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275086 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" exitCode=0 Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275180 4820 scope.go:117] "RemoveContainer" containerID="fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275879 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:14 crc kubenswrapper[4820]: E0221 09:03:14.276134 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:25 crc kubenswrapper[4820]: I0221 09:03:25.711076 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:25 crc kubenswrapper[4820]: E0221 09:03:25.712153 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 
09:03:31 crc kubenswrapper[4820]: I0221 09:03:31.463654 4820 generic.go:334] "Generic (PLEG): container finished" podID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerID="32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1" exitCode=0 Feb 21 09:03:31 crc kubenswrapper[4820]: I0221 09:03:31.463772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerDied","Data":"32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1"} Feb 21 09:03:32 crc kubenswrapper[4820]: I0221 09:03:32.928053 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099786 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099868 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.100032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.100103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.107475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm" (OuterVolumeSpecName: "kube-api-access-tt2tm") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "kube-api-access-tt2tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.109090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.136660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.138322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.145732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory" (OuterVolumeSpecName: "inventory") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.152203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.158515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202565 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202804 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202879 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202982 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203053 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203118 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203289 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerDied","Data":"36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f"} Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493341 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493149 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659002 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"] Feb 21 09:03:33 crc kubenswrapper[4820]: E0221 09:03:33.659508 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: E0221 09:03:33.659549 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659557 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659800 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659833 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.660791 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.663986 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.664271 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.665758 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.666647 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.667014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.019939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020029 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.030173 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"] Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.125851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod 
\"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.126192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.126680 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.127217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.137212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.305320 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.812100 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.816441 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"] Feb 21 09:03:35 crc kubenswrapper[4820]: I0221 09:03:35.516140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerStarted","Data":"b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a"} Feb 21 09:03:36 crc kubenswrapper[4820]: I0221 09:03:36.529726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerStarted","Data":"94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3"} Feb 21 09:03:36 crc kubenswrapper[4820]: I0221 09:03:36.554520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" podStartSLOduration=3.102626996 podStartE2EDuration="3.554501465s" podCreationTimestamp="2026-02-21 09:03:33 +0000 UTC" firstStartedPulling="2026-02-21 09:03:34.811878634 +0000 UTC m=+8189.844962832" lastFinishedPulling="2026-02-21 09:03:35.263753103 +0000 UTC m=+8190.296837301" observedRunningTime="2026-02-21 09:03:36.545017616 +0000 UTC m=+8191.578101814" watchObservedRunningTime="2026-02-21 09:03:36.554501465 +0000 UTC m=+8191.587585663" Feb 21 09:03:37 crc kubenswrapper[4820]: I0221 09:03:37.697270 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:37 crc kubenswrapper[4820]: E0221 09:03:37.697615 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:48 crc kubenswrapper[4820]: I0221 09:03:48.697688 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:48 crc kubenswrapper[4820]: E0221 09:03:48.698478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:59 crc kubenswrapper[4820]: I0221 09:03:59.697203 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:59 crc kubenswrapper[4820]: E0221 09:03:59.698072 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:04:13 crc kubenswrapper[4820]: I0221 09:04:13.697770 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:04:13 crc kubenswrapper[4820]: E0221 
09:04:13.698987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:04:28 crc kubenswrapper[4820]: I0221 09:04:28.696309 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:04:28 crc kubenswrapper[4820]: E0221 09:04:28.697168 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:04:33 crc kubenswrapper[4820]: I0221 09:04:33.098860 4820 generic.go:334] "Generic (PLEG): container finished" podID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerID="94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3" exitCode=0 Feb 21 09:04:33 crc kubenswrapper[4820]: I0221 09:04:33.098938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerDied","Data":"94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3"} Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.756130 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.856905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857020 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857144 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 
09:04:34.864457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.866156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt" (OuterVolumeSpecName: "kube-api-access-w94tt") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "kube-api-access-w94tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.891386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory" (OuterVolumeSpecName: "inventory") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.892319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.895627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960067 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960110 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960120 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960132 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") on node \"crc\" DevicePath \"\"" Feb 21 
09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerDied","Data":"b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a"} Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138852 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138873 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.208735 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"] Feb 21 09:04:35 crc kubenswrapper[4820]: E0221 09:04:35.209281 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.209305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.211120 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.218829 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.223886 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224310 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224489 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.239954 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"] Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367220 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: 
\"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.368012 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469397 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469468 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.473788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod 
\"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.473858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.475403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.477025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.487213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.541267 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:04:36 crc kubenswrapper[4820]: I0221 09:04:36.091005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"] Feb 21 09:04:36 crc kubenswrapper[4820]: I0221 09:04:36.149200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerStarted","Data":"cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799"} Feb 21 09:04:37 crc kubenswrapper[4820]: I0221 09:04:37.159315 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerStarted","Data":"c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1"} Feb 21 09:04:37 crc kubenswrapper[4820]: I0221 09:04:37.178515 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" podStartSLOduration=1.723226254 podStartE2EDuration="2.178496995s" podCreationTimestamp="2026-02-21 09:04:35 +0000 UTC" firstStartedPulling="2026-02-21 09:04:36.103431379 +0000 UTC m=+8251.136515587" lastFinishedPulling="2026-02-21 09:04:36.55870213 +0000 UTC m=+8251.591786328" observedRunningTime="2026-02-21 09:04:37.173676984 +0000 UTC m=+8252.206761182" watchObservedRunningTime="2026-02-21 09:04:37.178496995 +0000 UTC m=+8252.211581193" Feb 21 09:04:43 crc kubenswrapper[4820]: I0221 09:04:43.697354 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:04:43 crc kubenswrapper[4820]: E0221 09:04:43.698127 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:04:57 crc kubenswrapper[4820]: I0221 09:04:57.697770 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:04:57 crc kubenswrapper[4820]: E0221 09:04:57.698764 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:05:11 crc kubenswrapper[4820]: I0221 09:05:11.697306 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:05:11 crc kubenswrapper[4820]: E0221 09:05:11.698056 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:05:26 crc kubenswrapper[4820]: I0221 09:05:26.699669 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:05:26 crc kubenswrapper[4820]: E0221 09:05:26.700844 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:05:41 crc kubenswrapper[4820]: I0221 09:05:41.697315 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:05:41 crc kubenswrapper[4820]: E0221 09:05:41.698409 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:05:44 crc kubenswrapper[4820]: I0221 09:05:44.909162 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerID="c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1" exitCode=0 Feb 21 09:05:44 crc kubenswrapper[4820]: I0221 09:05:44.909205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerDied","Data":"c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1"} Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.310740 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.436536 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.436888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437459 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 
09:05:46.444492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.444790 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8" (OuterVolumeSpecName: "kube-api-access-trdg8") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "kube-api-access-trdg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.471687 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory" (OuterVolumeSpecName: "inventory") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.476046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.488359 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.540812 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541022 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541105 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541178 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") on node \"crc\" DevicePath \"\"" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541280 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:05:46 crc 
kubenswrapper[4820]: I0221 09:05:46.930943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerDied","Data":"cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799"} Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.931002 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799" Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.931031 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" Feb 21 09:05:53 crc kubenswrapper[4820]: I0221 09:05:53.696796 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:05:53 crc kubenswrapper[4820]: E0221 09:05:53.697495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:04 crc kubenswrapper[4820]: I0221 09:06:04.696995 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:04 crc kubenswrapper[4820]: E0221 09:06:04.697812 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.202169 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.202949 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.701580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.701818 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" containerID="cri-o://583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.856435 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.856655 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" containerID="cri-o://5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930098 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" containerID="cri-o://acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930731 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" containerID="cri-o://9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.971466 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.972094 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" containerID="cri-o://f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261" gracePeriod=30 Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.972176 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" containerID="cri-o://3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd" gracePeriod=30 Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.189271 4820 generic.go:334] "Generic (PLEG): container finished" podID="febb41c5-cb59-4868-b57d-63b20b422240" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" exitCode=143 Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.189335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"} Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.191892 4820 
generic.go:334] "Generic (PLEG): container finished" podID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerID="f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261" exitCode=143 Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.191930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261"} Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.434548 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.436090 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.437414 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.437482 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.964074 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014717 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014939 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.015668 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.032523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92" (OuterVolumeSpecName: "kube-api-access-gbj92") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "kube-api-access-gbj92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.052515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.069517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data" (OuterVolumeSpecName: "config-data") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.116775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.116829 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117010 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: 
\"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117423 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117440 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117451 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.120336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm" (OuterVolumeSpecName: "kube-api-access-5j8bm") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "kube-api-access-5j8bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.143201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data" (OuterVolumeSpecName: "config-data") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.144778 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205740 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff2505a3-9888-436f-9e92-045fb71aac57" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" exitCode=0 Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerDied","Data":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205824 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerDied","Data":"e739d22e8a5fb67dd8a38933da1b7cdbf628d65d406c279afc479f8a5e13a79c"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205830 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205841 4820 scope.go:117] "RemoveContainer" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211749 4820 generic.go:334] "Generic (PLEG): container finished" podID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" exitCode=0 Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerDied","Data":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerDied","Data":"2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211839 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219113 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219147 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219160 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.237185 4820 scope.go:117] "RemoveContainer" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.241508 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": container with ID starting with b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e not found: ID does not exist" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.241564 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} err="failed to get container status \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": rpc error: code = NotFound desc = could not find container \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": container 
with ID starting with b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e not found: ID does not exist" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.241590 4820 scope.go:117] "RemoveContainer" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.265320 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.283891 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.293748 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.300599 4820 scope.go:117] "RemoveContainer" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.301907 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.303114 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": container with ID starting with 583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47 not found: ID does not exist" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.303144 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} err="failed to get container status \"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": rpc error: code = NotFound desc = could not find container 
\"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": container with ID starting with 583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47 not found: ID does not exist" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311326 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311700 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311716 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311732 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311739 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311772 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311780 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311970 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311983 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311993 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.312731 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.315455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.325876 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.327740 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.334035 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.345862 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.360864 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424090 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424149 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9f8p\" (UniqueName: \"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526189 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526465 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9f8p\" (UniqueName: 
\"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.531202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.531599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.542187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.542436 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.546685 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9f8p\" (UniqueName: \"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: 
\"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.551155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.644295 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.655845 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.146501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.167257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.297785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1747f740-f880-4c19-817b-c9341c1179e7","Type":"ContainerStarted","Data":"e401112f74a2cfe1e3c2eab499282feb86545a3543e17a7461af78e50f166e4d"} Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.298637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4de2ed9-8828-4c5e-af1e-24c752565d74","Type":"ContainerStarted","Data":"eb2350dbca4068fd3d7b5187ffdbbad53f077cf0583131b4e6e47c513ff87b58"} Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.709612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" 
path="/var/lib/kubelet/pods/bef3408d-c90c-48d8-85fa-366e68d6e66d/volumes" Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.710751 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" path="/var/lib/kubelet/pods/ff2505a3-9888-436f-9e92-045fb71aac57/volumes" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.327986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4de2ed9-8828-4c5e-af1e-24c752565d74","Type":"ContainerStarted","Data":"c916061f222d05c6361a31199f1e68c39c4f1b523bcfd2639057574a3efd1eff"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.328445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.331873 4820 generic.go:334] "Generic (PLEG): container finished" podID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerID="3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd" exitCode=0 Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.331928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.337929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1747f740-f880-4c19-817b-c9341c1179e7","Type":"ContainerStarted","Data":"a342203a671583fac859bf8833c1ba92fb31d1c27dfa3491077ab1360325af8b"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.338910 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.354017 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.353997486 podStartE2EDuration="2.353997486s" podCreationTimestamp="2026-02-21 09:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:16.352138675 +0000 UTC m=+8351.385222863" watchObservedRunningTime="2026-02-21 09:06:16.353997486 +0000 UTC m=+8351.387081674" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.355687 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": read tcp 10.217.0.2:45938->10.217.1.99:8774: read: connection reset by peer" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.355723 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": read tcp 10.217.0.2:45934->10.217.1.99:8774: read: connection reset by peer" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.581647 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.628859 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.62883347 podStartE2EDuration="2.62883347s" podCreationTimestamp="2026-02-21 09:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:16.373167167 +0000 UTC m=+8351.406251375" watchObservedRunningTime="2026-02-21 09:06:16.62883347 +0000 UTC m=+8351.661917668" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.678947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679054 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod 
\"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.680027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs" (OuterVolumeSpecName: "logs") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.689905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w" (OuterVolumeSpecName: "kube-api-access-57j5w") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "kube-api-access-57j5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.752358 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.774731 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.780893 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data" (OuterVolumeSpecName: "config-data") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783603 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783625 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783636 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783645 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783654 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.816808 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.986897 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987180 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987461 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.990471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs" (OuterVolumeSpecName: "logs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.995497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v" (OuterVolumeSpecName: "kube-api-access-5kl5v") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "kube-api-access-5kl5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.023005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data" (OuterVolumeSpecName: "config-data") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.036574 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.040903 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.061015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090721 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090780 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090795 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090809 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090822 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090836 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348435 4820 generic.go:334] "Generic (PLEG): container finished" podID="febb41c5-cb59-4868-b57d-63b20b422240" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" exitCode=0 Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348517 4820 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"} Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"f84f25836fa8a5c0573e20405d3a79bd27bbd629ad136467d54a559c6258e788"} Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348683 4820 scope.go:117] "RemoveContainer" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.350952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0"} Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.351115 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.381266 4820 scope.go:117] "RemoveContainer" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.401180 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.409693 4820 scope.go:117] "RemoveContainer" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.410632 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": container with ID starting with 9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301 not found: ID does not exist" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.410676 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"} err="failed to get container status \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": rpc error: code = NotFound desc = could not find container \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": container with ID starting with 9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301 not found: ID does not exist" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.410708 4820 scope.go:117] "RemoveContainer" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.411530 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": container with ID starting with acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054 not found: ID does not exist" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.411583 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"} err="failed to get container status \"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": rpc error: code = NotFound desc = could not find container \"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": container with ID starting with acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054 not found: ID does not exist" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.411609 4820 scope.go:117] "RemoveContainer" containerID="3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.429034 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.440550 4820 scope.go:117] "RemoveContainer" containerID="f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.441063 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.460966 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474025 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474526 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb41c5-cb59-4868-b57d-63b20b422240" 
containerName="nova-api-log" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474543 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474566 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474596 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474602 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474614 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474621 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474785 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474801 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474819 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474833 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.475851 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.485702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.486175 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.492091 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.505312 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.507138 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509548 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.517209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705975 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.707219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.707843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.710798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.712873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.714013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.715330 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" path="/var/lib/kubelet/pods/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf/volumes" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.716591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717654 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="febb41c5-cb59-4868-b57d-63b20b422240" path="/var/lib/kubelet/pods/febb41c5-cb59-4868-b57d-63b20b422240/volumes" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.720974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.721739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.724482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.880892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.892062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.930933 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.019859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.020185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.020297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.033009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7" (OuterVolumeSpecName: "kube-api-access-n4mt7") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "kube-api-access-n4mt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.059870 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.093343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data" (OuterVolumeSpecName: "config-data") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127634 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127667 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127677 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388349 4820 generic.go:334] "Generic (PLEG): container finished" podID="475239fa-3785-4704-bef1-f554cf694456" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" exitCode=0 Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388422 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerDied","Data":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"} Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerDied","Data":"b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62"} Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388547 4820 scope.go:117] "RemoveContainer" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.424464 4820 scope.go:117] "RemoveContainer" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.430800 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": container with ID starting with 5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286 not found: ID does not exist" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.431106 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"} err="failed to get container status \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": rpc error: code = NotFound desc = could not find container \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": container with ID starting with 
5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286 not found: ID does not exist" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.461775 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.487014 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.518289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.531972 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.532466 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.532483 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.532677 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.533519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.555446 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.575835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.691147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.699038 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.699266 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.760367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.774873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.785113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.885665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.359647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:19 crc kubenswrapper[4820]: W0221 09:06:19.362697 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1667b0_00cb_4768_97cb_de0ee527f829.slice/crio-86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0 WatchSource:0}: Error finding container 86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0: Status 404 returned error can't find the container with id 86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0 Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"90db96dd0b8cc6de40f50614565a48c9546bd2da2255453fcb933fa7bbaf4de2"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404367 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"890630f2409d6d8a71452e5106fa511add930de6350c38167e1a89bd0d53903d"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"fbaaca88dbffda7c47585c253c606ae5fdd51f9eae104cdb70b1cbf1a100091e"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.419128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1667b0-00cb-4768-97cb-de0ee527f829","Type":"ContainerStarted","Data":"86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"0afa42237db1caf0aa5af9a485d9edeb1e300defb8d0dd5c9606d344cdfa2116"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"b862c7f4100cdf40e3f49957cf7d67764dbefbfaf926027a7198f4d58f541147"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"7d4cbefd15d015c5d34edbde0d8787d31158158f9304859821966d9653463e90"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.433215 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433195814 podStartE2EDuration="2.433195814s" podCreationTimestamp="2026-02-21 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:19.428657911 +0000 UTC m=+8354.461742129" watchObservedRunningTime="2026-02-21 09:06:19.433195814 +0000 UTC m=+8354.466280012" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.471155 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.471132205 podStartE2EDuration="2.471132205s" podCreationTimestamp="2026-02-21 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:19.452109658 +0000 UTC m=+8354.485193856" watchObservedRunningTime="2026-02-21 09:06:19.471132205 +0000 UTC m=+8354.504216393" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.709389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475239fa-3785-4704-bef1-f554cf694456" path="/var/lib/kubelet/pods/475239fa-3785-4704-bef1-f554cf694456/volumes" Feb 21 09:06:20 crc kubenswrapper[4820]: I0221 09:06:20.446904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1667b0-00cb-4768-97cb-de0ee527f829","Type":"ContainerStarted","Data":"02b24a320803b1dc026e2d46c72eb57f0dbd9ec5eab126940cd196ecb0f9db78"} Feb 21 09:06:20 crc kubenswrapper[4820]: I0221 09:06:20.476474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.476453085 podStartE2EDuration="2.476453085s" podCreationTimestamp="2026-02-21 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:20.467136401 +0000 UTC m=+8355.500220599" watchObservedRunningTime="2026-02-21 09:06:20.476453085 +0000 UTC m=+8355.509537283" Feb 21 09:06:22 crc kubenswrapper[4820]: I0221 09:06:22.881462 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 09:06:22 crc kubenswrapper[4820]: I0221 09:06:22.881837 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 09:06:23 crc kubenswrapper[4820]: I0221 09:06:23.886399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 09:06:24 crc kubenswrapper[4820]: I0221 09:06:24.677440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:24 crc kubenswrapper[4820]: I0221 09:06:24.693081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.881820 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.882174 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.892467 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.892512 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.886339 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.895480 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77c9db30-edab-4679-a671-15ae25d6448b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc 
kubenswrapper[4820]: I0221 09:06:28.895510 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77c9db30-edab-4679-a671-15ae25d6448b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.907449 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eae0a5ff-41ba-4522-a7f0-e69ff23ee566" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.907464 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eae0a5ff-41ba-4522-a7f0-e69ff23ee566" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.926833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 09:06:29 crc kubenswrapper[4820]: I0221 09:06:29.567215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 09:06:33 crc kubenswrapper[4820]: I0221 09:06:33.697132 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:33 crc kubenswrapper[4820]: E0221 09:06:33.697445 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.886605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.888470 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.891232 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.898108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.898498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.900031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.905445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.623477 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.628615 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.633085 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.968916 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:39 crc 
kubenswrapper[4820]: I0221 09:06:39.971054 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.973770 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975261 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975293 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975317 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975377 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.977927 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096421 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096464 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096489 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 
09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199295 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.200559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205505 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.206470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.208434 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.209185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.223101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.290551 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: W0221 09:06:40.853493 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2666b573_2e76_4374_9fd9_39ac7aabddef.slice/crio-0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f WatchSource:0}: Error finding container 0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f: Status 404 returned error can't find the container with id 0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.853511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.654418 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerStarted","Data":"eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841"} Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.654663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerStarted","Data":"0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f"} Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.680387 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" podStartSLOduration=2.197717235 podStartE2EDuration="2.68036394s" podCreationTimestamp="2026-02-21 09:06:39 +0000 UTC" firstStartedPulling="2026-02-21 09:06:40.857861073 +0000 UTC m=+8375.890945271" lastFinishedPulling="2026-02-21 
09:06:41.340507778 +0000 UTC m=+8376.373591976" observedRunningTime="2026-02-21 09:06:41.677884553 +0000 UTC m=+8376.710968751" watchObservedRunningTime="2026-02-21 09:06:41.68036394 +0000 UTC m=+8376.713448138" Feb 21 09:06:45 crc kubenswrapper[4820]: I0221 09:06:45.713993 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:45 crc kubenswrapper[4820]: E0221 09:06:45.716987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:00 crc kubenswrapper[4820]: I0221 09:07:00.697130 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:00 crc kubenswrapper[4820]: E0221 09:07:00.698216 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:11 crc kubenswrapper[4820]: I0221 09:07:11.697015 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:11 crc kubenswrapper[4820]: E0221 09:07:11.698215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:24 crc kubenswrapper[4820]: I0221 09:07:24.697102 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:24 crc kubenswrapper[4820]: E0221 09:07:24.698495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:36 crc kubenswrapper[4820]: I0221 09:07:36.696720 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:36 crc kubenswrapper[4820]: E0221 09:07:36.697714 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:50 crc kubenswrapper[4820]: I0221 09:07:50.696515 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:50 crc kubenswrapper[4820]: E0221 09:07:50.697646 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:08:05 crc kubenswrapper[4820]: I0221 09:08:05.707056 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:08:05 crc kubenswrapper[4820]: E0221 09:08:05.708034 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:08:17 crc kubenswrapper[4820]: I0221 09:08:17.697576 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:08:18 crc kubenswrapper[4820]: I0221 09:08:18.674738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"} Feb 21 09:10:04 crc kubenswrapper[4820]: I0221 09:10:04.748038 4820 generic.go:334] "Generic (PLEG): container finished" podID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerID="eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841" exitCode=0 Feb 21 09:10:04 crc kubenswrapper[4820]: I0221 09:10:04.748133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" 
event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerDied","Data":"eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841"} Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.190883 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379956 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.380003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.380104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.386202 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4" (OuterVolumeSpecName: "kube-api-access-7zdj4") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "kube-api-access-7zdj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.397085 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.409756 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.411280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.412653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.413959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.423740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.424200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory" (OuterVolumeSpecName: "inventory") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.424321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.431895 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.434931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483759 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483803 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483813 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483824 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483837 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483847 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483859 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdj4\" (UniqueName: 
\"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483871 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483880 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483892 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483903 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766795 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerDied","Data":"0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f"} Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766831 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766845 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:10:25 crc kubenswrapper[4820]: E0221 09:10:25.594230 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:49456->38.102.83.201:43255: write tcp 38.102.83.201:49456->38.102.83.201:43255: write: broken pipe Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.070655 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:35 crc kubenswrapper[4820]: E0221 09:10:35.071621 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.071638 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.071849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.073206 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.085623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.219504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.219808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.220002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322195 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322309 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.323217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.323206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.348285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.395975 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.963621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:36 crc kubenswrapper[4820]: I0221 09:10:36.058458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"3b55eef8026f41e4a67d5df7b619e17617d87f8da719b1494c48d75452b2a6af"} Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.069954 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7" exitCode=0 Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.070046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"} Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.073340 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:10:38 crc kubenswrapper[4820]: I0221 09:10:38.082817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"} Feb 21 09:10:39 crc kubenswrapper[4820]: I0221 09:10:39.097681 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9" exitCode=0 Feb 21 09:10:39 crc kubenswrapper[4820]: I0221 09:10:39.097909 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"} Feb 21 09:10:40 crc kubenswrapper[4820]: I0221 09:10:40.113499 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"} Feb 21 09:10:40 crc kubenswrapper[4820]: I0221 09:10:40.142022 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xnscq" podStartSLOduration=2.739347097 podStartE2EDuration="5.142004186s" podCreationTimestamp="2026-02-21 09:10:35 +0000 UTC" firstStartedPulling="2026-02-21 09:10:37.072984325 +0000 UTC m=+8612.106068523" lastFinishedPulling="2026-02-21 09:10:39.475641414 +0000 UTC m=+8614.508725612" observedRunningTime="2026-02-21 09:10:40.138494901 +0000 UTC m=+8615.171579129" watchObservedRunningTime="2026-02-21 09:10:40.142004186 +0000 UTC m=+8615.175088384" Feb 21 09:10:43 crc kubenswrapper[4820]: I0221 09:10:43.816551 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:10:43 crc kubenswrapper[4820]: I0221 09:10:43.816946 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.396878 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.398626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.452813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:46 crc kubenswrapper[4820]: I0221 09:10:46.232949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:46 crc kubenswrapper[4820]: I0221 09:10:46.301546 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.200104 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xnscq" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server" containerID="cri-o://6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" gracePeriod=2 Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.708890 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.867853 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities" (OuterVolumeSpecName: "utilities") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.873626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l" (OuterVolumeSpecName: "kube-api-access-ssn9l") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "kube-api-access-ssn9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.923878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969033 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969099 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969116 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214541 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" exitCode=0 Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"} Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214899 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"3b55eef8026f41e4a67d5df7b619e17617d87f8da719b1494c48d75452b2a6af"} Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214928 4820 scope.go:117] "RemoveContainer" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214721 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.239709 4820 scope.go:117] "RemoveContainer" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.269344 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.307819 4820 scope.go:117] "RemoveContainer" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.336156 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"] Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.340563 4820 scope.go:117] "RemoveContainer" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" Feb 21 09:10:49 crc kubenswrapper[4820]: E0221 09:10:49.341327 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": container with ID starting with 6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f not found: ID does not exist" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 
09:10:49.341418 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"} err="failed to get container status \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": rpc error: code = NotFound desc = could not find container \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": container with ID starting with 6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f not found: ID does not exist" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.341467 4820 scope.go:117] "RemoveContainer" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9" Feb 21 09:10:49 crc kubenswrapper[4820]: E0221 09:10:49.341922 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": container with ID starting with 2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9 not found: ID does not exist" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.341976 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"} err="failed to get container status \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": rpc error: code = NotFound desc = could not find container \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": container with ID starting with 2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9 not found: ID does not exist" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.342008 4820 scope.go:117] "RemoveContainer" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7" Feb 21 09:10:49 crc 
kubenswrapper[4820]: E0221 09:10:49.342415 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": container with ID starting with 5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7 not found: ID does not exist" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.342453 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"} err="failed to get container status \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": rpc error: code = NotFound desc = could not find container \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": container with ID starting with 5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7 not found: ID does not exist" Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.717576 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" path="/var/lib/kubelet/pods/7f67a876-9f79-4e98-98d9-c8f80940528f/volumes" Feb 21 09:11:13 crc kubenswrapper[4820]: I0221 09:11:13.815934 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:11:13 crc kubenswrapper[4820]: I0221 09:11:13.816787 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.807159 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808678 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808703 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server" Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808727 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-utilities" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808739 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-utilities" Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808775 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-content" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808788 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-content" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.809107 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.818144 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.847888 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.071838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.071911 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.098874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.158689 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.661251 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619018 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef" exitCode=0 Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"} Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerStarted","Data":"0f24824b62c5dcb7203393a1912dfcae426de62ec3168651c566197ab53cbe13"} Feb 21 09:11:28 crc kubenswrapper[4820]: I0221 09:11:28.630009 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7" exitCode=0 Feb 21 09:11:28 crc kubenswrapper[4820]: I0221 09:11:28.630057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"} Feb 21 09:11:29 crc kubenswrapper[4820]: I0221 09:11:29.643368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" 
event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerStarted","Data":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"} Feb 21 09:11:29 crc kubenswrapper[4820]: I0221 09:11:29.686373 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c67xc" podStartSLOduration=3.299600461 podStartE2EDuration="4.686344373s" podCreationTimestamp="2026-02-21 09:11:25 +0000 UTC" firstStartedPulling="2026-02-21 09:11:27.621617144 +0000 UTC m=+8662.654701342" lastFinishedPulling="2026-02-21 09:11:29.008361046 +0000 UTC m=+8664.041445254" observedRunningTime="2026-02-21 09:11:29.672026204 +0000 UTC m=+8664.705110412" watchObservedRunningTime="2026-02-21 09:11:29.686344373 +0000 UTC m=+8664.719428591" Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.159414 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.160316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.205008 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.762736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.816554 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:38 crc kubenswrapper[4820]: I0221 09:11:38.730145 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c67xc" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server" 
containerID="cri-o://19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" gracePeriod=2 Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.299079 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365145 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365174 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.366308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities" (OuterVolumeSpecName: "utilities") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.366912 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.372597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs" (OuterVolumeSpecName: "kube-api-access-2hwjs") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "kube-api-access-2hwjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.395804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.469622 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.469667 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") on node \"crc\" DevicePath \"\"" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740368 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" exitCode=0 Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"} Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"0f24824b62c5dcb7203393a1912dfcae426de62ec3168651c566197ab53cbe13"} Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740463 4820 scope.go:117] "RemoveContainer" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.741041 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.778253 4820 scope.go:117] "RemoveContainer" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.787342 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.798528 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"] Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.802848 4820 scope.go:117] "RemoveContainer" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.853357 4820 scope.go:117] "RemoveContainer" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.858718 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": container with ID starting with 19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f not found: ID does not exist" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.858781 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"} err="failed to get container status \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": rpc error: code = NotFound desc = could not find container \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": container with ID starting with 19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f not found: 
ID does not exist" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.858814 4820 scope.go:117] "RemoveContainer" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7" Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.859386 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": container with ID starting with 77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7 not found: ID does not exist" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.859461 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"} err="failed to get container status \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": rpc error: code = NotFound desc = could not find container \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": container with ID starting with 77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7 not found: ID does not exist" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.859489 4820 scope.go:117] "RemoveContainer" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef" Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.865732 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": container with ID starting with 3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef not found: ID does not exist" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef" Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.865971 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"} err="failed to get container status \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": rpc error: code = NotFound desc = could not find container \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": container with ID starting with 3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef not found: ID does not exist" Feb 21 09:11:41 crc kubenswrapper[4820]: I0221 09:11:41.711738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18749304-4042-46e9-8641-963815f5659c" path="/var/lib/kubelet/pods/18749304-4042-46e9-8641-963815f5659c/volumes" Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.816403 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.817361 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.817437 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.818325 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.818388 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" gracePeriod=600 Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790446 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" exitCode=0 Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"} Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790814 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790835 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:12:04 crc kubenswrapper[4820]: I0221 09:12:04.111791 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 09:12:04 crc kubenswrapper[4820]: I0221 09:12:04.113658 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" 
podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption" containerID="cri-o://9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49" gracePeriod=30 Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.294678 4820 generic.go:334] "Generic (PLEG): container finished" podID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerID="9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49" exitCode=137 Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.294772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerDied","Data":"9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49"} Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.658810 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.725377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.725896 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.732559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn" (OuterVolumeSpecName: "kube-api-access-gczmn") pod "f51c53b3-c766-40db-ad65-5935f9fb3ee4" (UID: 
"f51c53b3-c766-40db-ad65-5935f9fb3ee4"). InnerVolumeSpecName "kube-api-access-gczmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.747513 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0" (OuterVolumeSpecName: "mariadb-data") pod "f51c53b3-c766-40db-ad65-5935f9fb3ee4" (UID: "f51c53b3-c766-40db-ad65-5935f9fb3ee4"). InnerVolumeSpecName "pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.829804 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") on node \"crc\" DevicePath \"\"" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.829864 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") on node \"crc\" " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.869924 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.870775 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0") on node "crc" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.931905 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") on node \"crc\" DevicePath \"\"" Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerDied","Data":"707475d0c6275ed4702ec4fee55d65d5c005c4843fb7b9c91608c48f928cd4c6"} Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307116 4820 scope.go:117] "RemoveContainer" containerID="9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49" Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307131 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.351321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.364420 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.711190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" path="/var/lib/kubelet/pods/f51c53b3-c766-40db-ad65-5935f9fb3ee4/volumes" Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.917136 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.917620 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption" containerID="cri-o://7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" gracePeriod=30 Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.479066 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606248 4820 generic.go:334] "Generic (PLEG): container finished" podID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" exitCode=137 Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606324 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerDied","Data":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"} Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.607065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerDied","Data":"8dd551c3890db1e73ddd2531407ed1073b385c0ce262dc89304db8e225ef25b4"} Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.607126 4820 scope.go:117] "RemoveContainer" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.612489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.612683 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.613424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.619807 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk" (OuterVolumeSpecName: "kube-api-access-7gxxk") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "kube-api-access-7gxxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.622051 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.640263 4820 scope.go:117] "RemoveContainer" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" Feb 21 09:13:06 crc kubenswrapper[4820]: E0221 09:13:06.640931 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": container with ID starting with 7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8 not found: ID does not exist" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.641003 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"} err="failed to get container status \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": rpc error: code = NotFound desc = could not find container \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": container with ID starting with 
7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8 not found: ID does not exist" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.657212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452" (OuterVolumeSpecName: "ovn-data") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.715919 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") on node \"crc\" DevicePath \"\"" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.715962 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.716013 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") on node \"crc\" " Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.741401 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.741551 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452") on node "crc" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.818399 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") on node \"crc\" DevicePath \"\"" Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.943603 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.954292 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 09:13:07 crc kubenswrapper[4820]: I0221 09:13:07.718053 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" path="/var/lib/kubelet/pods/0aeb2e3c-2741-4cfb-ae99-d7f696b69490/volumes" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.935156 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936339 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server" Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936382 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-content" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936388 4820 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-content" Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936397 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936403 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936426 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936442 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-utilities" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936447 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-utilities" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936632 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936646 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936657 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.937458 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941005 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941572 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.942338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.948847 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwptg\" (UniqueName: 
\"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156760 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156887 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" 
(UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157841 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157900 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158043 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158895 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " 
pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.165921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.166053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.166334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.178336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.190898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.258135 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.742101 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.821397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerStarted","Data":"7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d"} Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.270560 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.271105 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.271286 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Livene
ssProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(417782d7-a42e-4872-9e2d-0f11848812cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.272463 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.468984 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/tempest-tests-tempest" 
podUID="417782d7-a42e-4872-9e2d-0f11848812cd" Feb 21 09:14:13 crc kubenswrapper[4820]: I0221 09:14:13.816830 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:14:13 crc kubenswrapper[4820]: I0221 09:14:13.816893 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:14:24 crc kubenswrapper[4820]: I0221 09:14:24.898001 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 21 09:14:26 crc kubenswrapper[4820]: I0221 09:14:26.589651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerStarted","Data":"ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344"} Feb 21 09:14:26 crc kubenswrapper[4820]: I0221 09:14:26.613608 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.470947165 podStartE2EDuration="1m2.613583661s" podCreationTimestamp="2026-02-21 09:13:24 +0000 UTC" firstStartedPulling="2026-02-21 09:13:26.752105331 +0000 UTC m=+8781.785189529" lastFinishedPulling="2026-02-21 09:14:24.894741827 +0000 UTC m=+8839.927826025" observedRunningTime="2026-02-21 09:14:26.606760555 +0000 UTC m=+8841.639844763" watchObservedRunningTime="2026-02-21 09:14:26.613583661 +0000 UTC m=+8841.646667859" Feb 21 09:14:43 crc kubenswrapper[4820]: I0221 09:14:43.817066 4820 patch_prober.go:28] 
interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:14:43 crc kubenswrapper[4820]: I0221 09:14:43.818298 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.153646 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.156076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.158628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.158667 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.162832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: 
\"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403579 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.407745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.417789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.420363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.482949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.012255 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:01 crc kubenswrapper[4820]: W0221 09:15:01.017088 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4 WatchSource:0}: Error finding container ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4: Status 404 returned error can't find the container with id ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4 Feb 21 09:15:01 crc kubenswrapper[4820]: E0221 09:15:01.943327 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-conmon-d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e.scope\": RecentStats: unable to find data in memory cache]" Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974900 4820 generic.go:334] "Generic (PLEG): container finished" podID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerID="d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e" exitCode=0 Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" 
event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerDied","Data":"d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e"} Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974968 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerStarted","Data":"ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4"} Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.422451 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.569747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.569979 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.570102 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.570562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.571880 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.576677 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9" (OuterVolumeSpecName: "kube-api-access-hcmk9") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "kube-api-access-hcmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.577353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.674077 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.674147 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991896 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerDied","Data":"ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4"} Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991936 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991997 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:04 crc kubenswrapper[4820]: I0221 09:15:04.495882 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 09:15:04 crc kubenswrapper[4820]: I0221 09:15:04.507436 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 09:15:05 crc kubenswrapper[4820]: I0221 09:15:05.725439 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" path="/var/lib/kubelet/pods/b7930cbc-54a2-4fed-8153-27bb0a44221d/volumes" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.816644 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.817261 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.817307 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.818059 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.818108 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" gracePeriod=600 Feb 21 09:15:13 crc kubenswrapper[4820]: E0221 09:15:13.939649 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086259 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" exitCode=0 Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086346 4820 scope.go:117] "RemoveContainer" containerID="00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.087017 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 
21 09:15:14 crc kubenswrapper[4820]: E0221 09:15:14.087319 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:25 crc kubenswrapper[4820]: I0221 09:15:25.706060 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:25 crc kubenswrapper[4820]: E0221 09:15:25.707009 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:36 crc kubenswrapper[4820]: I0221 09:15:36.696444 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:36 crc kubenswrapper[4820]: E0221 09:15:36.697143 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:37 crc kubenswrapper[4820]: I0221 09:15:37.686893 4820 scope.go:117] "RemoveContainer" 
containerID="bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf" Feb 21 09:15:48 crc kubenswrapper[4820]: I0221 09:15:48.696792 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:48 crc kubenswrapper[4820]: E0221 09:15:48.697612 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:02 crc kubenswrapper[4820]: I0221 09:16:02.696914 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:02 crc kubenswrapper[4820]: E0221 09:16:02.697949 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.099873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:04 crc kubenswrapper[4820]: E0221 09:16:04.100686 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.100700 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 
09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.100930 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.102357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.118463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.325000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.325273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.696263 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhdp\" (UniqueName: 
\"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.731334 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.251725 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548152 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a" exitCode=0 Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"} Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"5cb259e68b26b83bfc3d59361ebc5f021da97157c359d998893420becfd9ab1b"} Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.549859 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:16:06 crc kubenswrapper[4820]: I0221 09:16:06.562987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} Feb 21 09:16:12 crc 
kubenswrapper[4820]: I0221 09:16:12.613185 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635" exitCode=0 Feb 21 09:16:12 crc kubenswrapper[4820]: I0221 09:16:12.613288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.347734 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.353326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.361655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518762 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.623698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 
09:16:13.623787 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.654998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.688072 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.642854 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.682019 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2twvm" podStartSLOduration=3.148724132 podStartE2EDuration="10.681997098s" podCreationTimestamp="2026-02-21 09:16:04 +0000 UTC" firstStartedPulling="2026-02-21 09:16:05.549621855 +0000 UTC m=+8940.582706053" lastFinishedPulling="2026-02-21 09:16:13.082894821 +0000 UTC m=+8948.115979019" observedRunningTime="2026-02-21 09:16:14.671694437 +0000 UTC m=+8949.704778635" watchObservedRunningTime="2026-02-21 09:16:14.681997098 +0000 UTC m=+8949.715081316" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.731980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.732022 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.888751 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652068 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" exitCode=0 Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c"} Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"5a45eccaabcee7d31231def09c3616e013b3c8ce6ec34ca7e3f63210dcbce64e"} Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.789179 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:15 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:15 crc kubenswrapper[4820]: > Feb 21 09:16:17 crc kubenswrapper[4820]: I0221 09:16:17.682580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" 
event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} Feb 21 09:16:17 crc kubenswrapper[4820]: I0221 09:16:17.697773 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:17 crc kubenswrapper[4820]: E0221 09:16:17.698091 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:24 crc kubenswrapper[4820]: I0221 09:16:24.136466 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" exitCode=0 Feb 21 09:16:24 crc kubenswrapper[4820]: I0221 09:16:24.136540 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.148646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.175364 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5l92" podStartSLOduration=3.284537698 podStartE2EDuration="12.175343222s" 
podCreationTimestamp="2026-02-21 09:16:13 +0000 UTC" firstStartedPulling="2026-02-21 09:16:15.655333337 +0000 UTC m=+8950.688417535" lastFinishedPulling="2026-02-21 09:16:24.546138851 +0000 UTC m=+8959.579223059" observedRunningTime="2026-02-21 09:16:25.167157949 +0000 UTC m=+8960.200242167" watchObservedRunningTime="2026-02-21 09:16:25.175343222 +0000 UTC m=+8960.208427420" Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.793518 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:25 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:25 crc kubenswrapper[4820]: > Feb 21 09:16:29 crc kubenswrapper[4820]: I0221 09:16:29.699583 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:29 crc kubenswrapper[4820]: E0221 09:16:29.699976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.689591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.690142 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.749140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:34 crc kubenswrapper[4820]: I0221 09:16:34.348511 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:34 crc kubenswrapper[4820]: I0221 09:16:34.984461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:35 crc kubenswrapper[4820]: I0221 09:16:35.781529 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:35 crc kubenswrapper[4820]: > Feb 21 09:16:36 crc kubenswrapper[4820]: I0221 09:16:36.247463 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5l92" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server" containerID="cri-o://a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" gracePeriod=2 Feb 21 09:16:36 crc kubenswrapper[4820]: I0221 09:16:36.939620 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.091331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities" (OuterVolumeSpecName: "utilities") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.156928 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.192839 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.192872 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259117 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" exitCode=0 Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259172 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259188 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"5a45eccaabcee7d31231def09c3616e013b3c8ce6ec34ca7e3f63210dcbce64e"} Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259250 4820 scope.go:117] "RemoveContainer" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.282167 4820 scope.go:117] "RemoveContainer" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.591454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl" (OuterVolumeSpecName: "kube-api-access-w2cnl") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "kube-api-access-w2cnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.601553 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.607658 4820 scope.go:117] "RemoveContainer" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.696061 4820 scope.go:117] "RemoveContainer" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.697420 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": container with ID starting with a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af not found: ID does not exist" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697457 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} err="failed to get container status \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": rpc error: code = NotFound desc = could not find container \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": container with ID starting with a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697478 4820 scope.go:117] "RemoveContainer" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.697872 
4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": container with ID starting with 8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35 not found: ID does not exist" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697895 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} err="failed to get container status \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": rpc error: code = NotFound desc = could not find container \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": container with ID starting with 8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35 not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697908 4820 scope.go:117] "RemoveContainer" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.698455 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": container with ID starting with dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c not found: ID does not exist" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.698509 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c"} err="failed to get container status \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": rpc error: code = 
NotFound desc = could not find container \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": container with ID starting with dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.887106 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.897470 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:39 crc kubenswrapper[4820]: I0221 09:16:39.710532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" path="/var/lib/kubelet/pods/4e3fb2aa-800a-409e-b230-cb71f1276c7b/volumes" Feb 21 09:16:43 crc kubenswrapper[4820]: I0221 09:16:43.697077 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:43 crc kubenswrapper[4820]: E0221 09:16:43.698079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:45 crc kubenswrapper[4820]: I0221 09:16:45.776716 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:45 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:45 crc kubenswrapper[4820]: > Feb 21 09:16:54 crc kubenswrapper[4820]: I0221 09:16:54.780616 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:54 crc kubenswrapper[4820]: I0221 09:16:54.832290 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:55 crc kubenswrapper[4820]: I0221 09:16:55.021286 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:56 crc kubenswrapper[4820]: I0221 09:16:56.419529 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" containerID="cri-o://fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" gracePeriod=2 Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.047760 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.210387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities" (OuterVolumeSpecName: "utilities") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.214995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp" (OuterVolumeSpecName: "kube-api-access-hzhdp") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "kube-api-access-hzhdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.312300 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.312332 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.338790 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.414091 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428704 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" exitCode=0 Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428771 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"5cb259e68b26b83bfc3d59361ebc5f021da97157c359d998893420becfd9ab1b"} Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428785 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428788 4820 scope.go:117] "RemoveContainer" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.456076 4820 scope.go:117] "RemoveContainer" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.462868 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.471218 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.697389 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:57 crc kubenswrapper[4820]: E0221 09:16:57.697793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.710222 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" path="/var/lib/kubelet/pods/cca25b39-a0d0-4ca2-9000-9f888a196bab/volumes" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.014907 4820 scope.go:117] "RemoveContainer" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.062010 4820 scope.go:117] "RemoveContainer" 
containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.063539 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": container with ID starting with fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357 not found: ID does not exist" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.063593 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} err="failed to get container status \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": rpc error: code = NotFound desc = could not find container \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": container with ID starting with fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357 not found: ID does not exist" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.063623 4820 scope.go:117] "RemoveContainer" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635" Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.064228 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": container with ID starting with a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635 not found: ID does not exist" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064278 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} err="failed to get container status \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": rpc error: code = NotFound desc = could not find container \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": container with ID starting with a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635 not found: ID does not exist" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064299 4820 scope.go:117] "RemoveContainer" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a" Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.064548 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": container with ID starting with 375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a not found: ID does not exist" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a" Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064583 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"} err="failed to get container status \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": rpc error: code = NotFound desc = could not find container \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": container with ID starting with 375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a not found: ID does not exist" Feb 21 09:17:11 crc kubenswrapper[4820]: I0221 09:17:11.696133 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:17:11 crc kubenswrapper[4820]: E0221 09:17:11.697054 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:17:25 crc kubenswrapper[4820]: I0221 09:17:25.702566 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:17:25 crc kubenswrapper[4820]: E0221 09:17:25.703352 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:17:39 crc kubenswrapper[4820]: I0221 09:17:39.698230 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:17:39 crc kubenswrapper[4820]: E0221 09:17:39.699083 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:17:51 crc kubenswrapper[4820]: I0221 09:17:51.697396 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:17:51 crc kubenswrapper[4820]: E0221 09:17:51.698367 4820 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:18:02 crc kubenswrapper[4820]: I0221 09:18:02.696717 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:18:02 crc kubenswrapper[4820]: E0221 09:18:02.697411 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:18:17 crc kubenswrapper[4820]: I0221 09:18:17.696859 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:18:17 crc kubenswrapper[4820]: E0221 09:18:17.697924 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:18:28 crc kubenswrapper[4820]: I0221 09:18:28.697191 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:18:28 crc kubenswrapper[4820]: E0221 09:18:28.698508 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:18:41 crc kubenswrapper[4820]: I0221 09:18:41.696560 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:18:41 crc kubenswrapper[4820]: E0221 09:18:41.697261 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:18:55 crc kubenswrapper[4820]: I0221 09:18:55.702805 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:18:55 crc kubenswrapper[4820]: E0221 09:18:55.703557 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:19:09 crc kubenswrapper[4820]: I0221 09:19:09.696815 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:19:09 crc kubenswrapper[4820]: E0221 
09:19:09.697679 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:19:23 crc kubenswrapper[4820]: I0221 09:19:23.697330 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:19:23 crc kubenswrapper[4820]: E0221 09:19:23.698435 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:19:34 crc kubenswrapper[4820]: I0221 09:19:34.697895 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:19:34 crc kubenswrapper[4820]: E0221 09:19:34.698695 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:19:45 crc kubenswrapper[4820]: I0221 09:19:45.708655 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:19:45 crc 
kubenswrapper[4820]: E0221 09:19:45.710358 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:19:58 crc kubenswrapper[4820]: I0221 09:19:58.697271 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:19:58 crc kubenswrapper[4820]: E0221 09:19:58.698035 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:20:13 crc kubenswrapper[4820]: I0221 09:20:13.696934 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:20:13 crc kubenswrapper[4820]: E0221 09:20:13.697764 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:20:28 crc kubenswrapper[4820]: I0221 09:20:28.697532 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 
21 09:20:29 crc kubenswrapper[4820]: I0221 09:20:29.533668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"} Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.056819 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066843 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066876 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066917 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-content" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066927 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-content" Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066963 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-utilities" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-utilities" Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067016 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-utilities" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067026 4820 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-utilities" Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067157 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-content" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-content" Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067196 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067205 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067777 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067826 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.088763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.088776 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.271997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.272101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod 
\"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.272123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.273018 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.273094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.698821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.719196 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.202522 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869036 4820 generic.go:334] "Generic (PLEG): container finished" podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b" exitCode=0 Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"} Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"cc4e7f34da8593b16ce90f2bd7fafb03e884320aa86a31355dbc2e5db6b65df2"} Feb 21 09:20:48 crc kubenswrapper[4820]: I0221 09:20:48.879934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"} Feb 21 09:20:50 crc kubenswrapper[4820]: I0221 09:20:50.904524 4820 generic.go:334] "Generic (PLEG): container finished" podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb" exitCode=0 Feb 21 09:20:50 crc kubenswrapper[4820]: I0221 09:20:50.904627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" 
event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"} Feb 21 09:20:51 crc kubenswrapper[4820]: I0221 09:20:51.926252 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"} Feb 21 09:20:51 crc kubenswrapper[4820]: I0221 09:20:51.955531 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58zfg" podStartSLOduration=2.552876756 podStartE2EDuration="5.955515014s" podCreationTimestamp="2026-02-21 09:20:46 +0000 UTC" firstStartedPulling="2026-02-21 09:20:47.871257142 +0000 UTC m=+9222.904341340" lastFinishedPulling="2026-02-21 09:20:51.2738954 +0000 UTC m=+9226.306979598" observedRunningTime="2026-02-21 09:20:51.952958884 +0000 UTC m=+9226.986043082" watchObservedRunningTime="2026-02-21 09:20:51.955515014 +0000 UTC m=+9226.988599212" Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.720133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.720720 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.767339 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:57 crc kubenswrapper[4820]: I0221 09:20:57.021949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:57 crc kubenswrapper[4820]: I0221 09:20:57.079983 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:20:58 crc kubenswrapper[4820]: I0221 09:20:58.982707 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58zfg" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server" containerID="cri-o://f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" gracePeriod=2 Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.656435 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.740333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities" (OuterVolumeSpecName: "utilities") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: 
"e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.743864 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt" (OuterVolumeSpecName: "kube-api-access-bnwnt") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: "e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "kube-api-access-bnwnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.795320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: "e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841120 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841149 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") on node \"crc\" DevicePath \"\"" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995187 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" exitCode=0 Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"} Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"cc4e7f34da8593b16ce90f2bd7fafb03e884320aa86a31355dbc2e5db6b65df2"} Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995298 4820 scope.go:117] "RemoveContainer" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995500 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.038029 4820 scope.go:117] "RemoveContainer" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.046389 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.058511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"] Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.060224 4820 scope.go:117] "RemoveContainer" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119050 4820 scope.go:117] "RemoveContainer" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.119774 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": container with ID starting with f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5 not found: ID does not exist" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119813 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"} err="failed to get container status \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": rpc error: code = NotFound desc = could not find container \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": container with ID starting with f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5 not 
found: ID does not exist" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119838 4820 scope.go:117] "RemoveContainer" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb" Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.120284 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": container with ID starting with 80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb not found: ID does not exist" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120320 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"} err="failed to get container status \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": rpc error: code = NotFound desc = could not find container \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": container with ID starting with 80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb not found: ID does not exist" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120341 4820 scope.go:117] "RemoveContainer" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b" Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.120569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": container with ID starting with cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b not found: ID does not exist" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b" Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120589 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"} err="failed to get container status \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": rpc error: code = NotFound desc = could not find container \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": container with ID starting with cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b not found: ID does not exist" Feb 21 09:21:01 crc kubenswrapper[4820]: I0221 09:21:01.709996 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" path="/var/lib/kubelet/pods/e753b5e0-247c-41c3-b7b1-d0b10a067153/volumes" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.026780 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027768 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-utilities" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.027782 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-utilities" Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.027797 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server" Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027807 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-content" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 
09:21:42.027813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-content" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.028033 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.029794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.076874 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.232336 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.233608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.233917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: 
I0221 09:21:42.335147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335741 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.353259 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.373402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.917081 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411082 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" exitCode=0 Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a"} Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"c5b9b2036cb79cdd9ab9e450798a60fdb109f05ea61c9d939fe1251a5af2e168"} Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.412817 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:21:44 crc kubenswrapper[4820]: I0221 09:21:44.422097 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" 
event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} Feb 21 09:21:45 crc kubenswrapper[4820]: I0221 09:21:45.434795 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" exitCode=0 Feb 21 09:21:45 crc kubenswrapper[4820]: I0221 09:21:45.434908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} Feb 21 09:21:47 crc kubenswrapper[4820]: I0221 09:21:47.461668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} Feb 21 09:21:47 crc kubenswrapper[4820]: I0221 09:21:47.504705 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gb7ng" podStartSLOduration=3.077181143 podStartE2EDuration="5.504682673s" podCreationTimestamp="2026-02-21 09:21:42 +0000 UTC" firstStartedPulling="2026-02-21 09:21:43.412627718 +0000 UTC m=+9278.445711916" lastFinishedPulling="2026-02-21 09:21:45.840129248 +0000 UTC m=+9280.873213446" observedRunningTime="2026-02-21 09:21:47.492503211 +0000 UTC m=+9282.525587419" watchObservedRunningTime="2026-02-21 09:21:47.504682673 +0000 UTC m=+9282.537766871" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.373540 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.375587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.437221 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.550842 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.683050 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:54 crc kubenswrapper[4820]: I0221 09:21:54.520356 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gb7ng" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" containerID="cri-o://204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" gracePeriod=2 Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.099565 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.198362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities" (OuterVolumeSpecName: "utilities") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.205014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c" (OuterVolumeSpecName: "kube-api-access-btn2c") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "kube-api-access-btn2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.221532 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300563 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300820 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300830 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537542 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" exitCode=0 Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"c5b9b2036cb79cdd9ab9e450798a60fdb109f05ea61c9d939fe1251a5af2e168"} Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537680 4820 scope.go:117] "RemoveContainer" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.538116 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.589919 4820 scope.go:117] "RemoveContainer" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.591225 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.605152 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.621682 4820 scope.go:117] "RemoveContainer" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665114 4820 scope.go:117] "RemoveContainer" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 09:21:55.665640 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": container with ID starting with 204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209 not found: ID does not exist" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665679 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} err="failed to get container status \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": rpc error: code = NotFound desc = could not find container \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": container with ID starting with 204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209 not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665704 4820 scope.go:117] "RemoveContainer" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 09:21:55.666623 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": container with ID starting with 288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6 not found: ID does not exist" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.666725 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} err="failed to get container status \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": rpc error: code = NotFound desc = could not find container \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": container with ID starting with 288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6 not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.666780 4820 scope.go:117] "RemoveContainer" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 
09:21:55.667298 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": container with ID starting with 2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a not found: ID does not exist" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.667369 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a"} err="failed to get container status \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": rpc error: code = NotFound desc = could not find container \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": container with ID starting with 2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.713971 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32781487-aa7d-4011-9901-2f3e852902fc" path="/var/lib/kubelet/pods/32781487-aa7d-4011-9901-2f3e852902fc/volumes" Feb 21 09:22:43 crc kubenswrapper[4820]: I0221 09:22:43.816798 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:22:43 crc kubenswrapper[4820]: I0221 09:22:43.817770 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 21 09:23:13 crc kubenswrapper[4820]: I0221 09:23:13.816828 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:23:13 crc kubenswrapper[4820]: I0221 09:23:13.817467 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816044 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816565 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.817684 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.817742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b" gracePeriod=600 Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634420 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b" exitCode=0 Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634544 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"} Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"} Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634776 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:24:38 crc kubenswrapper[4820]: I0221 09:24:38.415770 4820 generic.go:334] "Generic (PLEG): container finished" podID="417782d7-a42e-4872-9e2d-0f11848812cd" containerID="ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344" exitCode=0 Feb 21 09:24:38 crc kubenswrapper[4820]: I0221 09:24:38.415869 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerDied","Data":"ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344"} Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.871768 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.873971 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874292 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874447 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874578 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874682 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.875848 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: 
"417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.876222 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.878009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data" (OuterVolumeSpecName: "config-data") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.879976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.885472 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg" (OuterVolumeSpecName: "kube-api-access-jwptg") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "kube-api-access-jwptg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.885571 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.908985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.913143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.929230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.961943 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978034 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978069 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978080 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978111 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978121 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978130 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978140 4820 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978148 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.000557 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.079570 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.465742 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.466336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerDied","Data":"7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d"} Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.466390 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.742507 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743467 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743484 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743503 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743511 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743526 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-content" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-content" 
Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743551 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-utilities" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743559 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-utilities" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743805 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743828 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.744673 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.747222 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.755089 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.882348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.882500 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986975 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.010641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.032256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.078560 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.516655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.576952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fe061f11-8b08-455e-856c-ac81ff40d655","Type":"ContainerStarted","Data":"a5dbd82e616fdc4fd12a1549520390a0c44b537a8b2a12bc309335a3d842f8e2"} Feb 21 09:24:51 crc kubenswrapper[4820]: I0221 09:24:51.590050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fe061f11-8b08-455e-856c-ac81ff40d655","Type":"ContainerStarted","Data":"2156cd57541c7a07479a2cfd0914a51b3444204c2c6a7eeebff0a3ac3dbc0405"} Feb 21 09:24:51 crc kubenswrapper[4820]: I0221 09:24:51.616069 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.774489695 podStartE2EDuration="2.61605332s" podCreationTimestamp="2026-02-21 09:24:49 +0000 UTC" firstStartedPulling="2026-02-21 09:24:50.526041357 +0000 UTC m=+9465.559125555" 
lastFinishedPulling="2026-02-21 09:24:51.367604982 +0000 UTC m=+9466.400689180" observedRunningTime="2026-02-21 09:24:51.611796565 +0000 UTC m=+9466.644880763" watchObservedRunningTime="2026-02-21 09:24:51.61605332 +0000 UTC m=+9466.649137508" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.040757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.047037 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.050465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j7d4d"/"kube-root-ca.crt" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.050992 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j7d4d"/"openshift-service-ca.crt" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.057332 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-j7d4d"/"default-dockercfg-rxswn" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.069093 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.162074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.162326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.282860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.382914 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:57 crc kubenswrapper[4820]: I0221 09:25:57.451535 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:58 crc kubenswrapper[4820]: I0221 09:25:58.238760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"b9cfef5032282ef1236e5e565e8e043337b305d7782be455af5d55dd83ba79e9"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.321861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.322424 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.340894 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" podStartSLOduration=3.142390143 podStartE2EDuration="9.340871694s" podCreationTimestamp="2026-02-21 09:25:55 +0000 UTC" firstStartedPulling="2026-02-21 09:25:57.455002092 +0000 UTC m=+9532.488086290" lastFinishedPulling="2026-02-21 09:26:03.653483643 +0000 UTC m=+9538.686567841" observedRunningTime="2026-02-21 09:26:04.336622478 +0000 UTC m=+9539.369706676" watchObservedRunningTime="2026-02-21 09:26:04.340871694 +0000 UTC m=+9539.373955892" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.674200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"] Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.676402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.721939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.722018 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.853744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:08 crc kubenswrapper[4820]: I0221 09:26:08.002150 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:08 crc kubenswrapper[4820]: W0221 09:26:08.051870 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564eff3b_59b2_4e16_af32_03335f96da2f.slice/crio-efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864 WatchSource:0}: Error finding container efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864: Status 404 returned error can't find the container with id efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864 Feb 21 09:26:08 crc kubenswrapper[4820]: I0221 09:26:08.369542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerStarted","Data":"efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864"} Feb 21 09:26:13 crc kubenswrapper[4820]: I0221 09:26:13.815917 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 
09:26:13 crc kubenswrapper[4820]: I0221 09:26:13.816390 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:26:18 crc kubenswrapper[4820]: I0221 09:26:18.468638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerStarted","Data":"26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97"} Feb 21 09:26:18 crc kubenswrapper[4820]: I0221 09:26:18.490666 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" podStartSLOduration=2.17072135 podStartE2EDuration="11.490632579s" podCreationTimestamp="2026-02-21 09:26:07 +0000 UTC" firstStartedPulling="2026-02-21 09:26:08.053827385 +0000 UTC m=+9543.086911583" lastFinishedPulling="2026-02-21 09:26:17.373738614 +0000 UTC m=+9552.406822812" observedRunningTime="2026-02-21 09:26:18.486386384 +0000 UTC m=+9553.519470582" watchObservedRunningTime="2026-02-21 09:26:18.490632579 +0000 UTC m=+9553.523731697" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.088108 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.091499 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.112408 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264552 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.366629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367286 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367785 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.386674 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.419802 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.072038 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700326 4820 generic.go:334] "Generic (PLEG): container finished" podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750" exitCode=0 Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750"} Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700878 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"5994a148d0ba6fc8e753c05577c030b45c9f3f4db18a5b2394bf8ac0120b2fb0"} Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.353585 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.356330 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.365432 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.756478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.756775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.425637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.588154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.892743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5"} Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.143147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.815773 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.816137 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.904932 4820 generic.go:334] "Generic (PLEG): container finished" podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5" exitCode=0 Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.904989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5"} Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.907173 4820 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910534 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702" exitCode=0 Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"} Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910602 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"6e7c8195813b95501a26d888a217c17c6d993ce9063ff83f248bfb25d3f41950"} Feb 21 09:26:44 crc kubenswrapper[4820]: I0221 09:26:44.923836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713"} Feb 21 09:26:44 crc kubenswrapper[4820]: I0221 09:26:44.951311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsvw6" podStartSLOduration=2.371411252 podStartE2EDuration="5.951291613s" podCreationTimestamp="2026-02-21 09:26:39 +0000 UTC" firstStartedPulling="2026-02-21 09:26:40.703952863 +0000 UTC m=+9575.737037061" lastFinishedPulling="2026-02-21 09:26:44.283833214 +0000 UTC m=+9579.316917422" observedRunningTime="2026-02-21 09:26:44.940454798 +0000 UTC m=+9579.973539006" watchObservedRunningTime="2026-02-21 09:26:44.951291613 +0000 UTC m=+9579.984375811" Feb 21 09:26:45 crc kubenswrapper[4820]: I0221 09:26:45.935975 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"} Feb 21 09:26:48 crc kubenswrapper[4820]: I0221 09:26:48.965684 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391" exitCode=0 Feb 21 09:26:48 crc kubenswrapper[4820]: I0221 09:26:48.965732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"} Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.420385 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.420711 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.489797 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.977663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"} Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.035320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.057632 4820 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnrdx" podStartSLOduration=3.646077359 podStartE2EDuration="9.05761096s" podCreationTimestamp="2026-02-21 09:26:41 +0000 UTC" firstStartedPulling="2026-02-21 09:26:43.912522323 +0000 UTC m=+9578.945606521" lastFinishedPulling="2026-02-21 09:26:49.324055924 +0000 UTC m=+9584.357140122" observedRunningTime="2026-02-21 09:26:50.000658041 +0000 UTC m=+9585.033742259" watchObservedRunningTime="2026-02-21 09:26:50.05761096 +0000 UTC m=+9585.090695158" Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.876388 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:51 crc kubenswrapper[4820]: I0221 09:26:51.992870 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vsvw6" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server" containerID="cri-o://591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713" gracePeriod=2 Feb 21 09:26:52 crc kubenswrapper[4820]: I0221 09:26:52.589415 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:52 crc kubenswrapper[4820]: I0221 09:26:52.589465 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:53 crc kubenswrapper[4820]: I0221 09:26:53.644010 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dnrdx" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" probeResult="failure" output=< Feb 21 09:26:53 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:26:53 crc kubenswrapper[4820]: > Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.024596 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713" exitCode=0 Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.024773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713"} Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.798414 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823409 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.824323 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities" (OuterVolumeSpecName: "utilities") pod 
"20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.848467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x" (OuterVolumeSpecName: "kube-api-access-cfn7x") pod "20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "kube-api-access-cfn7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.891723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926224 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") on node \"crc\" DevicePath \"\"" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926270 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926280 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"5994a148d0ba6fc8e753c05577c030b45c9f3f4db18a5b2394bf8ac0120b2fb0"} Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045572 4820 scope.go:117] "RemoveContainer" containerID="591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713" Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045593 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.079080 4820 scope.go:117] "RemoveContainer" containerID="4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5" Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.098699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.110430 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.199437 4820 scope.go:117] "RemoveContainer" containerID="39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750" Feb 21 09:26:57 crc kubenswrapper[4820]: I0221 09:26:57.707919 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" path="/var/lib/kubelet/pods/20fccb1d-70aa-48f7-a4e6-79411b1641f6/volumes" Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.656724 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.704502 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.894257 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.117654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnrdx" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" containerID="cri-o://171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" gracePeriod=2 Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 
09:27:04.596468 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700700 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700895 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.701328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities" (OuterVolumeSpecName: "utilities") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.701850 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.704920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p" (OuterVolumeSpecName: "kube-api-access-t2r8p") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "kube-api-access-t2r8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.804045 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.818055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.909302 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132103 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" exitCode=0 Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"} Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"6e7c8195813b95501a26d888a217c17c6d993ce9063ff83f248bfb25d3f41950"} Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132204 4820 scope.go:117] "RemoveContainer" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132387 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.172906 4820 scope.go:117] "RemoveContainer" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.177701 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.192349 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.204764 4820 scope.go:117] "RemoveContainer" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.253856 4820 scope.go:117] "RemoveContainer" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.254383 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": container with ID starting with 171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb not found: ID does not exist" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.254451 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"} err="failed to get container status \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": rpc error: code = NotFound desc = could not find container \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": container with ID starting with 171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb not found: ID does 
not exist" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.254491 4820 scope.go:117] "RemoveContainer" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391" Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.255136 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": container with ID starting with d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391 not found: ID does not exist" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255232 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"} err="failed to get container status \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": rpc error: code = NotFound desc = could not find container \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": container with ID starting with d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391 not found: ID does not exist" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255326 4820 scope.go:117] "RemoveContainer" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702" Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.255742 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": container with ID starting with 59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702 not found: ID does not exist" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255782 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"} err="failed to get container status \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": rpc error: code = NotFound desc = could not find container \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": container with ID starting with 59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702 not found: ID does not exist" Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.713190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" path="/var/lib/kubelet/pods/762a9a96-5afb-4798-9e70-7f385fe215ba/volumes" Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.816778 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.818001 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.818080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.819138 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.819220 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" gracePeriod=600 Feb 21 09:27:13 crc kubenswrapper[4820]: E0221 09:27:13.954833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.221897 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" exitCode=0 Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.221967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"} Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.222000 4820 scope.go:117] "RemoveContainer" containerID="1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b" Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.222691 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 
21 09:27:14 crc kubenswrapper[4820]: E0221 09:27:14.223011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.223767 4820 generic.go:334] "Generic (PLEG): container finished" podID="564eff3b-59b2-4e16-af32-03335f96da2f" containerID="26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97" exitCode=0 Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.223803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerDied","Data":"26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97"} Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.347048 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.383003 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"] Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.392799 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"] Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.522930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"564eff3b-59b2-4e16-af32-03335f96da2f\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"564eff3b-59b2-4e16-af32-03335f96da2f\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host" (OuterVolumeSpecName: "host") pod "564eff3b-59b2-4e16-af32-03335f96da2f" (UID: "564eff3b-59b2-4e16-af32-03335f96da2f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523654 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.528606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc" (OuterVolumeSpecName: "kube-api-access-g6cwc") pod "564eff3b-59b2-4e16-af32-03335f96da2f" (UID: "564eff3b-59b2-4e16-af32-03335f96da2f"). InnerVolumeSpecName "kube-api-access-g6cwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.625037 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.711768 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" path="/var/lib/kubelet/pods/564eff3b-59b2-4e16-af32-03335f96da2f/volumes" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.250961 4820 scope.go:117] "RemoveContainer" containerID="26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.251025 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.556767 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"] Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557168 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-content" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557180 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-content" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557194 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-utilities" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557200 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-utilities" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557211 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557217 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557227 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557246 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557267 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" 
containerName="extract-utilities" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557273 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-utilities" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557293 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-content" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557299 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-content" Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557322 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557505 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557536 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557547 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.558219 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.747524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.747809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc 
kubenswrapper[4820]: I0221 09:27:16.869097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.876081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.262164 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerStarted","Data":"8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831"} Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.262755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerStarted","Data":"5b92906d94e5a8e08553c92ea3dd1a5ffb9353aadeece8d948bc43af9cb01e9c"} Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.283464 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" podStartSLOduration=1.28344655 podStartE2EDuration="1.28344655s" podCreationTimestamp="2026-02-21 09:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:27:17.273605002 +0000 UTC m=+9612.306689200" watchObservedRunningTime="2026-02-21 09:27:17.28344655 +0000 UTC m=+9612.316530748" Feb 21 09:27:18 crc kubenswrapper[4820]: I0221 09:27:18.273134 4820 generic.go:334] "Generic (PLEG): container finished" podID="de2bf205-3558-4ec5-bea0-da1be48389d3" 
containerID="8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831" exitCode=0 Feb 21 09:27:18 crc kubenswrapper[4820]: I0221 09:27:18.273170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerDied","Data":"8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831"} Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.394351 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.507797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"de2bf205-3558-4ec5-bea0-da1be48389d3\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.507863 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host" (OuterVolumeSpecName: "host") pod "de2bf205-3558-4ec5-bea0-da1be48389d3" (UID: "de2bf205-3558-4ec5-bea0-da1be48389d3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.508104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"de2bf205-3558-4ec5-bea0-da1be48389d3\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.508560 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.515007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5" (OuterVolumeSpecName: "kube-api-access-d8mq5") pod "de2bf205-3558-4ec5-bea0-da1be48389d3" (UID: "de2bf205-3558-4ec5-bea0-da1be48389d3"). InnerVolumeSpecName "kube-api-access-d8mq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.611383 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.238058 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"] Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.247535 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"] Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.291752 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b92906d94e5a8e08553c92ea3dd1a5ffb9353aadeece8d948bc43af9cb01e9c" Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.291802 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.466227 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:21 crc kubenswrapper[4820]: E0221 09:27:21.467164 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.467187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.467695 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.468760 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.656990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.657399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.710560 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" path="/var/lib/kubelet/pods/de2bf205-3558-4ec5-bea0-da1be48389d3/volumes" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.759996 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.760120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.760146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.794072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:22 crc kubenswrapper[4820]: I0221 09:27:22.092068 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:22 crc kubenswrapper[4820]: W0221 09:27:22.134226 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c9fee18_cd2d_4504_baf5_9759037795cd.slice/crio-3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d WatchSource:0}: Error finding container 3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d: Status 404 returned error can't find the container with id 3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d Feb 21 09:27:22 crc kubenswrapper[4820]: I0221 09:27:22.319392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" event={"ID":"5c9fee18-cd2d-4504-baf5-9759037795cd","Type":"ContainerStarted","Data":"3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d"} Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.329699 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerID="e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4" exitCode=0 Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 
09:27:23.329754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" event={"ID":"5c9fee18-cd2d-4504-baf5-9759037795cd","Type":"ContainerDied","Data":"e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4"} Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.370176 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.381852 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.464520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"5c9fee18-cd2d-4504-baf5-9759037795cd\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host" (OuterVolumeSpecName: "host") pod "5c9fee18-cd2d-4504-baf5-9759037795cd" (UID: "5c9fee18-cd2d-4504-baf5-9759037795cd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624482 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"5c9fee18-cd2d-4504-baf5-9759037795cd\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.625154 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.630176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn" (OuterVolumeSpecName: "kube-api-access-4xxsn") pod "5c9fee18-cd2d-4504-baf5-9759037795cd" (UID: "5c9fee18-cd2d-4504-baf5-9759037795cd"). InnerVolumeSpecName "kube-api-access-4xxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.726699 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.350726 4820 scope.go:117] "RemoveContainer" containerID="e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.350880 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.708981 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" path="/var/lib/kubelet/pods/5c9fee18-cd2d-4504-baf5-9759037795cd/volumes" Feb 21 09:27:26 crc kubenswrapper[4820]: I0221 09:27:26.697138 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:26 crc kubenswrapper[4820]: E0221 09:27:26.698060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:41 crc kubenswrapper[4820]: I0221 09:27:41.697173 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:41 crc kubenswrapper[4820]: E0221 09:27:41.698053 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:52 crc kubenswrapper[4820]: I0221 09:27:52.696819 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:52 crc kubenswrapper[4820]: E0221 09:27:52.697670 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:05 crc kubenswrapper[4820]: I0221 09:28:05.703214 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:05 crc kubenswrapper[4820]: E0221 09:28:05.704104 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:20 crc kubenswrapper[4820]: I0221 09:28:20.696628 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:20 crc kubenswrapper[4820]: E0221 09:28:20.697260 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:35 crc kubenswrapper[4820]: I0221 09:28:35.705009 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:35 crc kubenswrapper[4820]: E0221 09:28:35.705766 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:47 crc kubenswrapper[4820]: I0221 09:28:47.697697 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:47 crc kubenswrapper[4820]: E0221 09:28:47.698893 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:01 crc kubenswrapper[4820]: I0221 09:29:01.697326 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:01 crc kubenswrapper[4820]: E0221 09:29:01.699067 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.683197 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/init-config-reloader/0.log" Feb 21 09:29:12 crc 
kubenswrapper[4820]: I0221 09:29:12.839463 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/init-config-reloader/0.log" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.887207 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/alertmanager/0.log" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.908093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/config-reloader/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.054844 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-api/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.106127 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-listener/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.110549 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-evaluator/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.227604 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-notifier/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.268808 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cf69c945b-fsc4w_08d7d55d-2b0b-40fe-9b1c-5930358bebe8/barbican-api/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.287885 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cf69c945b-fsc4w_08d7d55d-2b0b-40fe-9b1c-5930358bebe8/barbican-api-log/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.446610 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-754674bd8d-6lxjs_d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4/barbican-keystone-listener/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.644831 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769cf6fd65-dfls2_c1f442bc-072b-483e-8821-3ee262e5aa4e/barbican-worker/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.699077 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:13 crc kubenswrapper[4820]: E0221 09:29:13.700531 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.762463 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769cf6fd65-dfls2_c1f442bc-072b-483e-8821-3ee262e5aa4e/barbican-worker-log/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.893440 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-h8h82_b328f114-e2a2-4fe6-9e6d-bf8a99364733/bootstrap-openstack-openstack-cell1/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.899637 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-754674bd8d-6lxjs_d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4/barbican-keystone-listener-log/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.056116 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/ceilometer-central-agent/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.098171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/ceilometer-notification-agent/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.112783 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/proxy-httpd/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.254360 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/sg-core/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.320671 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a23af3b4-b486-43b2-b02c-da7b8937e091/cinder-api-log/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.334017 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a23af3b4-b486-43b2-b02c-da7b8937e091/cinder-api/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.545317 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77665b9b-37d6-4277-a75b-e30637b4b269/cinder-scheduler/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.560262 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77665b9b-37d6-4277-a75b-e30637b4b269/probe/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.764852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-hs6l2_979ca93e-175b-4fde-b503-0be2b59e1a99/configure-network-openstack-openstack-cell1/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.826285 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-cggfb_ceace068-0023-4d48-b24d-30cafb14db01/configure-os-openstack-openstack-cell1/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.924980 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/init/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.145920 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/init/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.162944 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-bdzjs_26d06bf4-eb66-4688-a6ba-292af8a3b9f5/download-cache-openstack-openstack-cell1/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.181462 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/dnsmasq-dns/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.370524 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c/glance-httpd/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.411983 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c/glance-log/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.593055 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8b461284-e512-4b62-95ae-fc82b119c340/glance-httpd/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.600759 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8b461284-e512-4b62-95ae-fc82b119c340/glance-log/0.log" Feb 21 09:29:16 crc 
kubenswrapper[4820]: I0221 09:29:16.036405 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7fbc8dc6-rvrvw_f98ac827-2c89-4d1b-afc3-a5bd668b5d60/heat-engine/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.089571 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-c9d48c7f5-9ghjf_55b82e21-7221-4043-b9a8-5ac5853acaa1/heat-api/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.321470 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5879b888bd-q5njq_d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6/horizon/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.463479 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-pdbm6_ddf72439-0ca3-4cbc-8186-fe74744a71e4/install-certs-openstack-openstack-cell1/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.502372 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-d46b7f59f-tgv4t_c1f86beb-e638-4e60-a435-b09e2c01e733/heat-cfnapi/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.649765 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-79fjr_8f2548bf-793b-464b-9659-2962669f353e/install-os-openstack-openstack-cell1/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.752088 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29527741-49n79_7c3e367e-0369-46eb-8886-a7d40b0a6626/keystone-cron/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.812352 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5879b888bd-q5njq_d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6/horizon-log/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.992820 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_478142ab-f7fa-4bbd-9051-6d1f5e16a9e2/kube-state-metrics/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.195698 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-vxt45_d646e04b-4083-4b58-a73f-47c72ba78dcc/libvirt-openstack-openstack-cell1/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.665494 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67f7f95649-vvsjb_546bedfc-a666-471b-9a9f-e4f4dd1e629e/neutron-httpd/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.692424 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-fcdf4b996-mcbdr_1f763cab-817e-415e-bb73-4e077fa0c745/keystone-api/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.894410 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-brfqb_0e84eaf9-2cd2-457c-b532-d632db99ba6e/neutron-dhcp-openstack-openstack-cell1/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.939468 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67f7f95649-vvsjb_546bedfc-a666-471b-9a9f-e4f4dd1e629e/neutron-api/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.963451 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-49ck6_915c12d6-5a69-4e4b-a001-b9e865d4377b/neutron-metadata-openstack-openstack-cell1/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.256437 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-z2jbg_f751ca69-8835-4c27-b4ab-9dac973aacd6/neutron-sriov-openstack-openstack-cell1/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.605036 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_d4de2ed9-8828-4c5e-af1e-24c752565d74/nova-cell0-conductor-conductor/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.675855 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_eae0a5ff-41ba-4522-a7f0-e69ff23ee566/nova-api-log/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.839234 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_eae0a5ff-41ba-4522-a7f0-e69ff23ee566/nova-api-api/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.963986 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1747f740-f880-4c19-817b-c9341c1179e7/nova-cell1-conductor-conductor/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.183529 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fb1fd00e-e5fe-4977-91db-dc6b86e63e34/nova-cell1-novncproxy-novncproxy/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.279508 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5_2666b573-2e76-4374-9fd9-39ac7aabddef/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.435913 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-w4sqf_c653de2c-8672-42fb-81c0-4e66975a3b8f/nova-cell1-openstack-openstack-cell1/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.590372 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77c9db30-edab-4679-a671-15ae25d6448b/nova-metadata-log/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.841789 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_4d1667b0-00cb-4768-97cb-de0ee527f829/nova-scheduler-scheduler/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.980852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.231668 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77c9db30-edab-4679-a671-15ae25d6448b/nova-metadata-metadata/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.465199 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.471812 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/galera/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.558578 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.717846 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.754133 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/galera/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.849782 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c888e608-8215-44cd-a30b-43b1c34b5685/openstackclient/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.021024 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_f9b120b4-ea8d-499d-a8ca-43faa31f000e/ovn-northd/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.063112 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f9b120b4-ea8d-499d-a8ca-43faa31f000e/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.233421 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0292096a-9b13-475a-971c-cf4dae1a3f8f/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.280367 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-hxv8b_7b3e6252-4e79-4ce6-87f1-8b0e8c885536/ovn-openstack-openstack-cell1/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.385328 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0292096a-9b13-475a-971c-cf4dae1a3f8f/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.488495 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.574579 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.686040 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c7377f38-4907-4b1d-a339-f274c122ef5c/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.698394 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c7377f38-4907-4b1d-a339-f274c122ef5c/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.925666 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_6aaa256c-7102-4960-ade0-b903b29b2716/ovsdbserver-sb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.932598 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6aaa256c-7102-4960-ade0-b903b29b2716/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.114080 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_66a6723b-ff49-4d22-a6cd-1e9509165729/ovsdbserver-sb/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.123170 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_66a6723b-ff49-4d22-a6cd-1e9509165729/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.234335 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_dcf6ab13-da71-49ec-b2dc-27602f1a953f/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.354257 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_dcf6ab13-da71-49ec-b2dc-27602f1a953f/ovsdbserver-sb/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.539916 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64bd48f99b-s6zl2_924c1ab4-a83b-4ab0-9c80-b77489d668f7/placement-api/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.623220 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64bd48f99b-s6zl2_924c1ab4-a83b-4ab0-9c80-b77489d668f7/placement-log/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.659200 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n_d5f7b8c5-1ad0-4d18-bf56-89197679507f/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: 
I0221 09:29:22.852724 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/init-config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.078091 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/thanos-sidecar/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.081782 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.084751 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/init-config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.114952 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/prometheus/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.311008 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.474144 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.542374 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/rabbitmq/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.554562 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: 
I0221 09:29:23.803851 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.879405 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-42cjk_4449546f-cb82-4976-b53e-cad851a6369d/reboot-os-openstack-openstack-cell1/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.339602 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-57cnm_4ade5366-52be-4c8f-b9e2-1088b04caa90/run-os-openstack-openstack-cell1/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.576058 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-4pwnt_2090d99c-7240-49ef-85d8-187c0cd6c146/ssh-known-hosts-openstack/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.816593 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cc65c7f54-9sg96_1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d/proxy-server/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.016957 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-8nnn7_867214ab-adcb-4e78-838b-a16cda8f543c/swift-ring-rebalance/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.031195 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cc65c7f54-9sg96_1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d/proxy-httpd/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.278224 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-wpbzs_dab763aa-fd5e-41b2-96d8-f758ad76f779/telemetry-openstack-openstack-cell1/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.389868 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/rabbitmq/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.479332 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_417782d7-a42e-4872-9e2d-0f11848812cd/tempest-tests-tempest-tests-runner/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.539146 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fe061f11-8b08-455e-856c-ac81ff40d655/test-operator-logs-container/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.797514 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk_8acec915-5e23-4212-9bce-50fec475c433/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.839525 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-wn9jn_15b9de10-7535-4310-9681-2d0171fb4376/validate-network-openstack-openstack-cell1/0.log" Feb 21 09:29:28 crc kubenswrapper[4820]: I0221 09:29:28.697219 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:28 crc kubenswrapper[4820]: E0221 09:29:28.697719 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:39 crc kubenswrapper[4820]: I0221 09:29:39.220502 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_4c039fd9-87df-497c-8e40-f9b5d2759d0f/memcached/0.log" Feb 21 09:29:42 crc kubenswrapper[4820]: I0221 09:29:42.697301 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:42 crc kubenswrapper[4820]: E0221 09:29:42.698094 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.633253 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.697212 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:57 crc kubenswrapper[4820]: E0221 09:29:57.697566 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.786904 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc 
kubenswrapper[4820]: I0221 09:29:57.800180 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.818153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.962171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.967214 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.984655 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/extract/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.402493 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-7fq9h_f8b2e5d3-e795-4971-92d9-f0d8f6586fa8/manager/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.826852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-gbtvh_f8cd79d8-6ba2-467c-95b5-4d965d73ed75/manager/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.959666 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-tlx7z_a4f64d1a-4768-48e1-8a88-fbf906956528/manager/0.log" Feb 21 09:29:59 crc kubenswrapper[4820]: I0221 09:29:59.163007 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-t6t6b_7ab15a3b-5688-4d42-b99a-e88bb8b11f65/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.160286 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:00 crc kubenswrapper[4820]: E0221 09:30:00.161209 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.161225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.161434 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.176935 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.179820 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.180052 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.192540 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.206885 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fj4tn_4f343be8-a654-43ac-938a-6b726caab1ad/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.300718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.301036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.301178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402932 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.404732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc 
kubenswrapper[4820]: I0221 09:30:00.411402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.419802 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-qvl8t_2ae82741-a73e-4d45-852f-a206550cb1e9/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.426782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.510804 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.254565 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.255904 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-pbn9f_047df55d-9730-4215-bbd5-73fd59a0e9f5/manager/0.log" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.257837 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-lgdx6_903ed1dc-819c-4ed9-86f6-ca32e4f96792/manager/0.log" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.983691 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-gxpq6_b248c78b-0213-4833-8d04-7d2514c2e673/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.207287 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-lzhqv_9ec17569-aac1-4b58-8efc-b5a483e47a71/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.268409 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerID="c2a97cd59dc207b1a15728870570ab992ee2d7a09fce3da6d739cd4c1be9594f" exitCode=0 Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.268667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerDied","Data":"c2a97cd59dc207b1a15728870570ab992ee2d7a09fce3da6d739cd4c1be9594f"} Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.269228 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerStarted","Data":"a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c"} Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.578803 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-c96wv_2b4b6741-5442-4ef0-a8e1-49e389157cd4/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.778337 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf_c4453479-1bc9-4393-8853-396ec6ae4f7f/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.235778 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-lx4sd_3c9c6322-ba57-47b3-a079-ab86a6660c45/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.343171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-l85mk_b7cb4a9f-82fd-41b1-8175-351de45fde99/operator/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.610188 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.763874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.764039 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.764172 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.765185 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.769965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg" (OuterVolumeSpecName: "kube-api-access-kt4qg") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). 
InnerVolumeSpecName "kube-api-access-kt4qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.776515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.837707 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-lrgjm_76209e29-400d-4677-85b5-89c5f4e9323a/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869429 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869461 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869470 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.881569 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tt62z_2af934a2-6680-4932-b3af-5f8bdee6c740/registry-server/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.003105 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-2dfxn_9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.141484 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-n6dpn_18cf798f-3eea-4e15-8bb1-bda4895ffed4/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerDied","Data":"a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c"} Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288846 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288908 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.404939 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wv5gr_fde95ed3-63bc-4401-b8b8-539da71db026/operator/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.503363 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-cv9cl_412bd84a-46bb-49b9-8d0a-17d6cc683ea0/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.684884 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.699215 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.841456 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-whrpt_b425a24f-112c-4e36-a173-21a59ce15ef0/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.988934 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-jdxhc_246cc20b-aa24-4c15-8eb7-659e10b21e92/manager/0.log" Feb 21 09:30:05 crc kubenswrapper[4820]: I0221 09:30:05.035402 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-jt2g2_ee323e4c-82c4-4b71-b69b-5aef22e36516/manager/0.log" Feb 21 09:30:05 crc kubenswrapper[4820]: I0221 09:30:05.709351 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" path="/var/lib/kubelet/pods/e44294e9-1a1b-421f-bed6-f72a8bb45e1d/volumes" Feb 21 
09:30:07 crc kubenswrapper[4820]: I0221 09:30:07.007679 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-f84fz_5424a0f0-819f-46e7-9d7d-00bbe249e4a9/manager/0.log" Feb 21 09:30:07 crc kubenswrapper[4820]: I0221 09:30:07.328626 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-54dzd_d922fcc6-f8a7-451a-b998-fc04189a6d85/manager/0.log" Feb 21 09:30:12 crc kubenswrapper[4820]: I0221 09:30:12.696795 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:12 crc kubenswrapper[4820]: E0221 09:30:12.697422 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:24 crc kubenswrapper[4820]: I0221 09:30:24.697330 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:24 crc kubenswrapper[4820]: E0221 09:30:24.697977 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.717531 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zl5zd_3b64a6e2-e14a-4de0-8630-e617a55b0794/control-plane-machine-set-operator/0.log" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.919819 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zz4sx_8add43c0-9280-4e92-b4fe-4628eb645e56/kube-rbac-proxy/0.log" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.968869 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zz4sx_8add43c0-9280-4e92-b4fe-4628eb645e56/machine-api-operator/0.log" Feb 21 09:30:38 crc kubenswrapper[4820]: I0221 09:30:38.162500 4820 scope.go:117] "RemoveContainer" containerID="d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.002001 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-mf4f6_d35515e4-d029-4f6a-be2a-d7ea32ab06ad/cert-manager-controller/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.232651 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-9lkfz_3a53f347-c86d-4ef3-82c2-29549135afe6/cert-manager-cainjector/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.233448 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-r5ddv_e88f2404-d287-429a-a995-ea8be7fa5be8/cert-manager-webhook/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.697682 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:39 crc kubenswrapper[4820]: E0221 09:30:39.698206 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.698594 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-b5kf2_15902f84-d2f7-42a0-929e-89c21cffddd8/nmstate-console-plugin/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.882738 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tz942_a6c76731-bd23-43eb-84f6-84d675965035/nmstate-handler/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.983816 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m6svj_b7930d8a-8ded-4552-9c0a-aa73fa2006e2/kube-rbac-proxy/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.993300 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m6svj_b7930d8a-8ded-4552-9c0a-aa73fa2006e2/nmstate-metrics/0.log" Feb 21 09:30:52 crc kubenswrapper[4820]: I0221 09:30:52.148424 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-4rnft_375887b5-9d2e-4af8-9128-789ebd290f97/nmstate-operator/0.log" Feb 21 09:30:52 crc kubenswrapper[4820]: I0221 09:30:52.198452 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-c8gmp_62b9a00a-9b7e-4057-bc85-2a16c48957f4/nmstate-webhook/0.log" Feb 21 09:30:53 crc kubenswrapper[4820]: I0221 09:30:53.696734 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:53 crc kubenswrapper[4820]: E0221 09:30:53.697306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:04 crc kubenswrapper[4820]: I0221 09:31:04.936793 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lw5b9_b371e087-d814-4a0f-9ff3-d55d20e24544/prometheus-operator/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.097496 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-p7twl_33a57c79-5f59-4436-802e-2be346a7f24b/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.116817 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv_6c94be0a-30e4-454d-a744-be2161cdbed2/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.295452 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t74mh_dab6a090-8dce-4a3c-aa4a-467c37f77510/operator/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.314415 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m42j5_33c8ba11-479e-4bbc-87c4-0d6da77be2eb/perses-operator/0.log" Feb 21 09:31:07 crc kubenswrapper[4820]: I0221 09:31:07.697109 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:07 crc kubenswrapper[4820]: E0221 09:31:07.697964 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.172090 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jrcl5_6f342ec6-aed8-48ff-a1ba-9d6634bda927/kube-rbac-proxy/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.447305 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.584771 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jrcl5_6f342ec6-aed8-48ff-a1ba-9d6634bda927/controller/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.701212 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:19 crc kubenswrapper[4820]: E0221 09:31:19.701489 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.734371 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.762680 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.765843 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.787093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.975173 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.976663 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.003191 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.010412 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.167056 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.197676 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/controller/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.201950 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.223755 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.402021 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/frr-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.454555 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/kube-rbac-proxy/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.467383 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/kube-rbac-proxy-frr/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.647163 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.670419 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-jw8nq_a5c8b64a-a6da-435e-a87d-bd397ad045a4/frr-k8s-webhook-server/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.903164 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9bd6bbfc6-srwvl_bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d/manager/0.log" Feb 21 09:31:21 crc kubenswrapper[4820]: I0221 09:31:21.085678 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c68698666-cvwrd_e17110d4-51ce-4fca-a5e7-ba4eedeb42a8/webhook-server/0.log" Feb 21 09:31:21 crc kubenswrapper[4820]: I0221 09:31:21.134711 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwv62_cc577a47-69e2-4ae2-93c1-e922f0c6e3d8/kube-rbac-proxy/0.log" Feb 21 09:31:22 crc kubenswrapper[4820]: I0221 09:31:22.018098 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwv62_cc577a47-69e2-4ae2-93c1-e922f0c6e3d8/speaker/0.log" Feb 21 09:31:23 crc kubenswrapper[4820]: I0221 09:31:23.537755 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/frr/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.549956 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.799228 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.812020 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.858686 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.027446 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.063346 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.244753 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/extract/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.393285 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.423373 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.457847 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.457918 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.615535 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.655909 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 
09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.656034 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/extract/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.697232 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:34 crc kubenswrapper[4820]: E0221 09:31:34.697566 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.798653 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.937428 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.969654 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.979150 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:35 crc 
kubenswrapper[4820]: I0221 09:31:35.169451 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.194816 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.197786 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/extract/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.335319 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.496134 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.500727 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.545550 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.150819 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 
09:31:36.218619 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.338548 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.663598 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.740625 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.754675 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.897912 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.040134 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.386426 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.428859 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/registry-server/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.499961 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.593345 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.626920 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.172153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/registry-server/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.196007 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.225502 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/extract/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.232263 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 
09:31:38.351288 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wq5r9_37683f41-a9aa-4abd-809d-25df5114e93a/marketplace-operator/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.400597 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.556830 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.573709 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.588076 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.757148 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.779855 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.861076 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.015911 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.064406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.091182 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/registry-server/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.124038 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.267768 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.305850 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log" Feb 21 09:31:40 crc kubenswrapper[4820]: I0221 09:31:40.539159 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/registry-server/0.log" Feb 21 09:31:45 crc kubenswrapper[4820]: I0221 09:31:45.704553 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:45 crc kubenswrapper[4820]: E0221 09:31:45.705684 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:51 crc kubenswrapper[4820]: I0221 09:31:51.981100 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lw5b9_b371e087-d814-4a0f-9ff3-d55d20e24544/prometheus-operator/0.log" Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.010061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv_6c94be0a-30e4-454d-a744-be2161cdbed2/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.024140 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-p7twl_33a57c79-5f59-4436-802e-2be346a7f24b/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.155032 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t74mh_dab6a090-8dce-4a3c-aa4a-467c37f77510/operator/0.log" Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.224723 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m42j5_33c8ba11-479e-4bbc-87c4-0d6da77be2eb/perses-operator/0.log" Feb 21 09:31:57 crc kubenswrapper[4820]: E0221 09:31:57.552100 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:35156->38.102.83.201:43255: write tcp 38.102.83.201:35156->38.102.83.201:43255: write: broken pipe Feb 21 09:31:57 crc kubenswrapper[4820]: I0221 09:31:57.696981 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:57 crc 
kubenswrapper[4820]: E0221 09:31:57.697579 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.677823 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:10 crc kubenswrapper[4820]: E0221 09:32:10.678787 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.678805 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.679089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.680951 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.691822 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.696536 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:32:10 crc kubenswrapper[4820]: E0221 09:32:10.696805 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.776558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.777362 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.777553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.880348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.880608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.901827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:11 crc kubenswrapper[4820]: I0221 09:32:11.001619 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:11 crc kubenswrapper[4820]: I0221 09:32:11.516133 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424446 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4" exitCode=0 Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"} Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"bd4acb78834dcfa9f90a8b8d7183d3227058b4942670ca06e6201e726bc94f40"} Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.427878 4820 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.263985 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.267802 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.282552 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.436407 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" 
event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"} Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457713 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.458184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.458298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" 
(UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.478024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.584160 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.063162 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.446623 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" exitCode=0 Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.446667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266"} Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.447051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"fce0bbbe9c0db17ad430e4a8439b4f17e67bbff84360a08e20f939148a9ce9fd"} Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.471977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" 
event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"} Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.477794 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58" exitCode=0 Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.477858 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"} Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.491511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"} Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.494816 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20" exitCode=0 Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.494847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"} Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.519832 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdpzw" podStartSLOduration=3.057871767 podStartE2EDuration="6.519813289s" podCreationTimestamp="2026-02-21 09:32:10 +0000 UTC" firstStartedPulling="2026-02-21 09:32:12.427549289 +0000 UTC 
m=+9907.460633487" lastFinishedPulling="2026-02-21 09:32:15.889490811 +0000 UTC m=+9910.922575009" observedRunningTime="2026-02-21 09:32:16.51765385 +0000 UTC m=+9911.550738048" watchObservedRunningTime="2026-02-21 09:32:16.519813289 +0000 UTC m=+9911.552897477" Feb 21 09:32:17 crc kubenswrapper[4820]: I0221 09:32:17.506842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"} Feb 21 09:32:17 crc kubenswrapper[4820]: I0221 09:32:17.527043 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-llzgs" podStartSLOduration=2.108247057 podStartE2EDuration="4.527028s" podCreationTimestamp="2026-02-21 09:32:13 +0000 UTC" firstStartedPulling="2026-02-21 09:32:14.448296983 +0000 UTC m=+9909.481381181" lastFinishedPulling="2026-02-21 09:32:16.867077926 +0000 UTC m=+9911.900162124" observedRunningTime="2026-02-21 09:32:17.524133211 +0000 UTC m=+9912.557217449" watchObservedRunningTime="2026-02-21 09:32:17.527028 +0000 UTC m=+9912.560112198" Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.004203 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.004932 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.059033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.594156 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:22 crc 
kubenswrapper[4820]: I0221 09:32:22.655039 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:22 crc kubenswrapper[4820]: I0221 09:32:22.697063 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.565606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"} Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.565742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdpzw" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" containerID="cri-o://0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" gracePeriod=2 Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.585098 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.586196 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.663084 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.101691 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.207819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.207921 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.208081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.209380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities" (OuterVolumeSpecName: "utilities") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.226313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw" (OuterVolumeSpecName: "kube-api-access-kxsmw") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "kube-api-access-kxsmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.311454 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.311497 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.334829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.413866 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.576631 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" exitCode=0 Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.577939 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"} Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583270 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"bd4acb78834dcfa9f90a8b8d7183d3227058b4942670ca06e6201e726bc94f40"} Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583299 4820 scope.go:117] "RemoveContainer" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.607075 4820 scope.go:117] "RemoveContainer" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.623411 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.637292 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"] Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.660415 4820 scope.go:117] "RemoveContainer" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.682537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.712730 4820 scope.go:117] "RemoveContainer" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" Feb 21 09:32:24 crc 
kubenswrapper[4820]: E0221 09:32:24.713831 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": container with ID starting with 0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310 not found: ID does not exist" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.713859 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"} err="failed to get container status \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": rpc error: code = NotFound desc = could not find container \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": container with ID starting with 0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310 not found: ID does not exist" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.713877 4820 scope.go:117] "RemoveContainer" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58" Feb 21 09:32:24 crc kubenswrapper[4820]: E0221 09:32:24.714181 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": container with ID starting with bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58 not found: ID does not exist" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714198 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"} err="failed to get container status 
\"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": rpc error: code = NotFound desc = could not find container \"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": container with ID starting with bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58 not found: ID does not exist" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714210 4820 scope.go:117] "RemoveContainer" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4" Feb 21 09:32:24 crc kubenswrapper[4820]: E0221 09:32:24.714665 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": container with ID starting with b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4 not found: ID does not exist" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4" Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714687 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"} err="failed to get container status \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": rpc error: code = NotFound desc = could not find container \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": container with ID starting with b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4 not found: ID does not exist" Feb 21 09:32:25 crc kubenswrapper[4820]: I0221 09:32:25.709065 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" path="/var/lib/kubelet/pods/8d56d629-33f3-48af-b7f5-acc9cd1c206c/volumes" Feb 21 09:32:25 crc kubenswrapper[4820]: I0221 09:32:25.858412 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 
09:32:27 crc kubenswrapper[4820]: I0221 09:32:27.610752 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-llzgs" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" containerID="cri-o://76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" gracePeriod=2 Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.161366 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307059 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307260 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.308634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities" (OuterVolumeSpecName: "utilities") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.333467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.409534 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.409566 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.654944 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" exitCode=0 Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"} Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"fce0bbbe9c0db17ad430e4a8439b4f17e67bbff84360a08e20f939148a9ce9fd"} Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655131 4820 
scope.go:117] "RemoveContainer" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655266 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.678210 4820 scope.go:117] "RemoveContainer" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.796458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr" (OuterVolumeSpecName: "kube-api-access-hh2lr") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "kube-api-access-hh2lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.817396 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") on node \"crc\" DevicePath \"\"" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.838373 4820 scope.go:117] "RemoveContainer" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.931606 4820 scope.go:117] "RemoveContainer" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932054 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": container with ID starting with 76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca not found: ID does not exist" 
containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932087 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"} err="failed to get container status \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": rpc error: code = NotFound desc = could not find container \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": container with ID starting with 76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca not found: ID does not exist" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932107 4820 scope.go:117] "RemoveContainer" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20" Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932384 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": container with ID starting with 7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20 not found: ID does not exist" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932430 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"} err="failed to get container status \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": rpc error: code = NotFound desc = could not find container \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": container with ID starting with 7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20 not found: ID does not exist" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932455 4820 scope.go:117] 
"RemoveContainer" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932755 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": container with ID starting with 40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266 not found: ID does not exist" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932775 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266"} err="failed to get container status \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": rpc error: code = NotFound desc = could not find container \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": container with ID starting with 40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266 not found: ID does not exist" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.985147 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.994744 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:29 crc kubenswrapper[4820]: I0221 09:32:29.710967 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" path="/var/lib/kubelet/pods/e14b6a60-a84b-48e4-8a49-82c31e29a67a/volumes" Feb 21 09:33:38 crc kubenswrapper[4820]: I0221 09:33:38.295880 4820 scope.go:117] "RemoveContainer" containerID="8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831" Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.713824 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" exitCode=0 Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.713909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerDied","Data":"4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f"} Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.714789 4820 scope.go:117] "RemoveContainer" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" Feb 21 09:34:01 crc kubenswrapper[4820]: I0221 09:34:01.539093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/gather/0.log" Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.318505 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.319313 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" containerID="cri-o://5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" gracePeriod=2 Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.330206 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.819478 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.820122 4820 generic.go:334] "Generic (PLEG): container finished" podID="a1900ff3-6f36-49ff-88d2-898da25c3385" 
containerID="5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" exitCode=143 Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.318745 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.319303 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.427298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"a1900ff3-6f36-49ff-88d2-898da25c3385\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.427492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"a1900ff3-6f36-49ff-88d2-898da25c3385\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.432912 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg" (OuterVolumeSpecName: "kube-api-access-gq9cg") pod "a1900ff3-6f36-49ff-88d2-898da25c3385" (UID: "a1900ff3-6f36-49ff-88d2-898da25c3385"). InnerVolumeSpecName "kube-api-access-gq9cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.529449 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") on node \"crc\" DevicePath \"\"" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.624930 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a1900ff3-6f36-49ff-88d2-898da25c3385" (UID: "a1900ff3-6f36-49ff-88d2-898da25c3385"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.631418 4820 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.712267 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" path="/var/lib/kubelet/pods/a1900ff3-6f36-49ff-88d2-898da25c3385/volumes" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.833549 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.834046 4820 scope.go:117] "RemoveContainer" containerID="5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.834156 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.859835 4820 scope.go:117] "RemoveContainer" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" Feb 21 09:34:43 crc kubenswrapper[4820]: I0221 09:34:43.815848 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:34:43 crc kubenswrapper[4820]: I0221 09:34:43.816402 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:13 crc kubenswrapper[4820]: I0221 09:35:13.816632 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:35:13 crc kubenswrapper[4820]: I0221 09:35:13.817369 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.815779 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.816560 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.816625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.817688 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.817759 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740" gracePeriod=600 Feb 21 09:35:44 crc kubenswrapper[4820]: E0221 09:35:44.014927 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-conmon-bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740.scope\": RecentStats: unable to find data in memory cache]" Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.847505 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740" exitCode=0 Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.847611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"} Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.848513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"3feb72bbf479b271fbf4fa3fe95eeb9d443bbe28e301842a89237a175e700cac"} Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.848599 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.131486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133393 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133406 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133413 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133432 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133466 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133474 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133485 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133493 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133509 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133533 4820 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133554 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133561 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133811 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133825 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133860 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.135650 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.167162 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.296885 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.296951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.297170 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399787 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.400499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.400499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.425071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.463764 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.910192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.741855 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d" exitCode=0 Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.742321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"} Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.742384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"aad725d2c0c1a23529c6dda5056ee5ab45cdedb96213744c47c043cffbee6a9f"} Feb 21 09:37:07 crc kubenswrapper[4820]: I0221 09:37:07.752223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} Feb 21 09:37:10 crc kubenswrapper[4820]: I0221 09:37:10.785097 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130" exitCode=0 Feb 21 09:37:10 crc kubenswrapper[4820]: I0221 09:37:10.785598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" 
event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} Feb 21 09:37:11 crc kubenswrapper[4820]: I0221 09:37:11.814215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"} Feb 21 09:37:11 crc kubenswrapper[4820]: I0221 09:37:11.844529 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rgdbd" podStartSLOduration=2.338540374 podStartE2EDuration="6.84451379s" podCreationTimestamp="2026-02-21 09:37:05 +0000 UTC" firstStartedPulling="2026-02-21 09:37:06.747794187 +0000 UTC m=+10201.780878385" lastFinishedPulling="2026-02-21 09:37:11.253767593 +0000 UTC m=+10206.286851801" observedRunningTime="2026-02-21 09:37:11.836075 +0000 UTC m=+10206.869159208" watchObservedRunningTime="2026-02-21 09:37:11.84451379 +0000 UTC m=+10206.877597978" Feb 21 09:37:15 crc kubenswrapper[4820]: I0221 09:37:15.464213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:15 crc kubenswrapper[4820]: I0221 09:37:15.465156 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:16 crc kubenswrapper[4820]: I0221 09:37:16.520480 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rgdbd" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" probeResult="failure" output=< Feb 21 09:37:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:37:16 crc kubenswrapper[4820]: > Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.525857 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.610791 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.776186 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:26 crc kubenswrapper[4820]: I0221 09:37:26.984430 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rgdbd" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" containerID="cri-o://a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" gracePeriod=2 Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.494844 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630188 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod 
\"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.631259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities" (OuterVolumeSpecName: "utilities") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.635424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92" (OuterVolumeSpecName: "kube-api-access-qjp92") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "kube-api-access-qjp92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.733369 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.733405 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.775409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.835017 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996149 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" exitCode=0 Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"} Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"aad725d2c0c1a23529c6dda5056ee5ab45cdedb96213744c47c043cffbee6a9f"} Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996286 4820 scope.go:117] "RemoveContainer" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996215 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.021334 4820 scope.go:117] "RemoveContainer" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.039495 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.047602 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.074444 4820 scope.go:117] "RemoveContainer" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.096639 4820 scope.go:117] "RemoveContainer" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.097107 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": container with ID starting with a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447 not found: ID does not exist" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097149 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"} err="failed to get container status \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": rpc error: code = NotFound desc = could not find container \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": container with ID starting with a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447 not found: ID does 
not exist" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097181 4820 scope.go:117] "RemoveContainer" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130" Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.097927 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": container with ID starting with 01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130 not found: ID does not exist" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097949 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} err="failed to get container status \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": rpc error: code = NotFound desc = could not find container \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": container with ID starting with 01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130 not found: ID does not exist" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097966 4820 scope.go:117] "RemoveContainer" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d" Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.098236 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": container with ID starting with 8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d not found: ID does not exist" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d" Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.098286 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"} err="failed to get container status \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": rpc error: code = NotFound desc = could not find container \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": container with ID starting with 8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d not found: ID does not exist" Feb 21 09:37:29 crc kubenswrapper[4820]: I0221 09:37:29.711489 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" path="/var/lib/kubelet/pods/fc01af99-1ad2-4dea-a60d-2b37377ccd46/volumes" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.527168 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528210 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-content" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528228 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-content" Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-utilities" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528305 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-utilities" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528561 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.530512 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.552632 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.633872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.633974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.634022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 
09:37:39.736539 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.737894 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738013 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.765583 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.873074 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:40 crc kubenswrapper[4820]: I0221 09:37:40.399048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:40 crc kubenswrapper[4820]: W0221 09:37:40.405593 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb861e2e2_bab2_43e3_ac35_db7964e69058.slice/crio-fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1 WatchSource:0}: Error finding container fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1: Status 404 returned error can't find the container with id fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1 Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.118990 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7" exitCode=0 Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.119062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"} Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.119322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" 
event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1"} Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.122160 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:37:42 crc kubenswrapper[4820]: I0221 09:37:42.130431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"} Feb 21 09:37:43 crc kubenswrapper[4820]: I0221 09:37:43.146846 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86" exitCode=0 Feb 21 09:37:43 crc kubenswrapper[4820]: I0221 09:37:43.146946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"} Feb 21 09:37:44 crc kubenswrapper[4820]: I0221 09:37:44.157725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"} Feb 21 09:37:44 crc kubenswrapper[4820]: I0221 09:37:44.187338 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2vvg" podStartSLOduration=2.7691615499999997 podStartE2EDuration="5.187319317s" podCreationTimestamp="2026-02-21 09:37:39 +0000 UTC" firstStartedPulling="2026-02-21 09:37:41.121822957 +0000 UTC m=+10236.154907155" lastFinishedPulling="2026-02-21 09:37:43.539980724 +0000 UTC 
m=+10238.573064922" observedRunningTime="2026-02-21 09:37:44.181125347 +0000 UTC m=+10239.214209545" watchObservedRunningTime="2026-02-21 09:37:44.187319317 +0000 UTC m=+10239.220403515" Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.874291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.874656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.926747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:50 crc kubenswrapper[4820]: I0221 09:37:50.266806 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:50 crc kubenswrapper[4820]: I0221 09:37:50.311091 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.243611 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2vvg" podUID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerName="registry-server" containerID="cri-o://4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" gracePeriod=2 Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.744801 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.928870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.939193 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.939673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.940428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities" (OuterVolumeSpecName: "utilities") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.940827 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.946619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh" (OuterVolumeSpecName: "kube-api-access-57fhh") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "kube-api-access-57fhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.998090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.043179 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.043226 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255511 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" exitCode=0 Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"} Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255621 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1"} Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255693 4820 scope.go:117] "RemoveContainer" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.278599 4820 scope.go:117] "RemoveContainer" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.310051 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.320571 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"] Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.322473 4820 scope.go:117] "RemoveContainer" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354096 4820 scope.go:117] "RemoveContainer" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 09:37:53.354489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": container with ID starting with 4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855 not found: ID does not exist" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354528 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"} err="failed to get container status \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": rpc error: code = NotFound desc = could not find container \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": container with ID starting with 4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855 not found: ID does not exist" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354568 4820 scope.go:117] "RemoveContainer" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86" Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 09:37:53.354857 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": container with ID starting with a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86 not found: ID does not exist" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354906 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"} err="failed to get container status \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": rpc error: code = NotFound desc = could not find container \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": container with ID starting with a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86 not found: ID does not exist" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354939 4820 scope.go:117] "RemoveContainer" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7" Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 
09:37:53.355411 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": container with ID starting with dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7 not found: ID does not exist" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.355475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"} err="failed to get container status \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": rpc error: code = NotFound desc = could not find container \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": container with ID starting with dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7 not found: ID does not exist" Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.713324 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b861e2e2-bab2-43e3-ac35-db7964e69058" path="/var/lib/kubelet/pods/b861e2e2-bab2-43e3-ac35-db7964e69058/volumes" Feb 21 09:38:13 crc kubenswrapper[4820]: I0221 09:38:13.815921 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:38:13 crc kubenswrapper[4820]: I0221 09:38:13.816398 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 21 09:38:43 crc kubenswrapper[4820]: I0221 09:38:43.815931 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:38:43 crc kubenswrapper[4820]: I0221 09:38:43.817384 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"